Sundar Pichai on Gemini, AI progress and more


Infrastructure for the AI era: Introducing Trillium

Training state-of-the-art models requires enormous computing power. Industry demand for ML compute has grown by a factor of 1 million in the last six years, and it increases tenfold every year.

Google was built for this. For 25 years, we've invested in world-class technical infrastructure, from the cutting-edge hardware that powers Search to our custom tensor processing units that power our AI advances.

Gemini was trained and served entirely on our fourth- and fifth-generation TPUs. And other leading AI companies, including Anthropic, have trained their models on TPUs as well.

Today, we're excited to announce the sixth generation of TPUs, called Trillium. Trillium is our most performant and most efficient TPU to date, delivering a 4.7x improvement in compute performance per chip over the previous generation, TPU v5e.

We'll make Trillium available to our Cloud customers in late 2024.

Alongside our TPUs, we're proud to offer CPUs and GPUs to support any workload. That includes the new Axion processors we announced last month, our first custom Arm-based CPU, which delivers industry-leading performance and energy efficiency.

We're also proud to be one of the first Cloud providers to offer Nvidia's cutting-edge Blackwell GPUs, available in early 2025. We're fortunate to have a longstanding partnership with NVIDIA, and are excited to bring Blackwell's breakthrough capabilities to our customers.

Chips are a foundational part of our integrated, end-to-end system, from performance-optimized hardware and open software to flexible consumption models. This all comes together in our AI Hypercomputer, a groundbreaking supercomputer architecture.

Businesses and developers are using it to tackle more complex challenges, with more than twice the efficiency relative to just buying the raw hardware and chips. Our AI Hypercomputer advances are made possible in part by our approach to liquid cooling in our data centers.

We've been doing this for nearly a decade, long before it became state-of-the-art for the industry. And today our total deployed fleet capacity for liquid cooling systems is nearly 1 gigawatt and growing: that's close to 70 times the capacity of any other fleet.

Underlying this is the sheer scale of our network, which connects our infrastructure globally. Our network spans more than 2 million miles of terrestrial and subsea fiber: over 10 times the reach of the next leading cloud provider.

We will keep making the investments necessary to advance AI innovation and deliver state-of-the-art capabilities.
