Infrastructure Modernization

Google Cloud infrastructure enhancements tailored for your workloads

October 12, 2022
Sachin Gupta

Vice President & GM, Infrastructure, Google Cloud

Michael Weingartner

VP/GM Cloud Application Ecosystem

For far too long, cloud infrastructure has focused on raw speeds and feeds of building blocks such as VMs, containers, networks, and storage. Today, Moore’s law is slowing, and the burden of picking the right combination of infrastructure components increasingly falls on IT. 

At Google Cloud we are committed to removing that burden. We’ve engineered golden paths from silicon to the console, with a recipe of purpose-built infrastructure, prescriptive architectures, and an ecosystem to deliver workload-optimized infrastructure. And at this year’s Google Cloud Next, we made some exciting new announcements across key workloads.

In this post, we will put the new Google Cloud releases and capabilities in the context of popular workloads: from AI/ML to high-performance computing and data analytics, to SAP, VMware, and mainframes.

Powering AI/ML workloads 

No single technology has the potential to drive more transformation than AI and ML. At Next, we announced several new AI-based services: Translation Hub and DocAI services, and the OpenXLA Project, an open-source ecosystem of ML compiler technologies co-developed by a consortium of industry leaders.

To build these innovative services, you also need a powerful infrastructure platform. Today, we announced the following additions to our compute offerings:

  • Cloud TPU v4 Pods - Now in General Availability (GA), Google’s advanced machine learning infrastructure runs in the world’s largest publicly available ML hub, in Oklahoma, and offers up to 9 exaflops of peak aggregate compute. Developers and researchers can use TPU v4 to train increasingly sophisticated models to power workloads such as large-scale natural language processing (NLP), recommendation systems, and computer vision algorithms in a cost-effective and sustainable way. In June 2022, Cloud TPU v4 recorded the fastest training times on five MLPerf 2.0 benchmarks, with up to 80% better performance and up to 50% lower training cost than the next best available alternative.

  • A2 Ultra GPU VM instances - Now GA, you can use A2 Ultra GPU VM instances with Compute Engine, Google Kubernetes Engine, and Deep Learning VMs. A2 Ultra GPUs have the largest and fastest GPU memory in the Google Cloud portfolio and are optimized for large AI/ML and HPC workloads, with use cases such as AI assistants, recommendation systems, and autonomous driving. Powered by NVIDIA A100 Tensor Core GPUs with 80 GB of GPU memory, A2 Ultra delivers 25% higher inference throughput and 2x higher performance on HPC simulations than the original A2 machine shapes (see the provisioning sketch below).

Learn more about A2 Ultra, and hear from our customers Uber and Cohere about how you can accelerate your ML development, in breakout session MOD300.
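
To make this concrete, here is a minimal sketch of provisioning an A2 Ultra VM with the google-cloud-compute Python client. The zone, boot image family, and resource names are illustrative assumptions, not prescriptions.

```python
# Sketch: create an A2 Ultra VM (a2-ultragpu-1g pairs one NVIDIA A100 80GB GPU
# with the VM shape). Zone, image family, and names are assumptions.
from google.cloud import compute_v1

def create_a2_ultra_vm(project: str, zone: str = "us-central1-c",
                       name: str = "a2-ultra-demo") -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/a2-ultragpu-1g",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    # A Deep Learning VM image family (assumed) with CUDA preinstalled.
                    source_image=("projects/deeplearning-platform-release/"
                                  "global/images/family/common-cu113"),
                    disk_size_gb=200,
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
        # GPU VMs cannot live-migrate; they must terminate during host maintenance.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # Block until the VM is created.
```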

Boost HPC and data-intensive workloads

Customers rely on Google Cloud to help them run data-intensive workloads such as HPC and Hadoop. The new C3 machine series (currently in Preview) is the first in the public cloud to include the new 4th Gen Intel Xeon processor and Google’s custom Intel Infrastructure Processing Unit that enables 200 Gbps, low-latency networking. Learn more about top HPC best practices in breakout session MOD106.

You can pair C3 with Hyperdisk (currently in Private Preview), our next-generation block storage, which offers 80% higher IOPS per vCPU for high-end database management system (DBMS) workloads compared with other hyperscalers. Data workloads such as Hadoop and databases may no longer need oversized compute instances to benefit from high IOPS. Learn more about Hyperdisk in breakout session MOD206.
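
As a rough sketch of what pairing C3 with Hyperdisk could look like once the preview is available to you, the snippet below provisions a volume with provisioned IOPS through the google-cloud-compute client. The "hyperdisk-extreme" disk type name, zone, size, and IOPS figure are assumptions for illustration.

```python
# Sketch: create a Hyperdisk volume with provisioned IOPS (type name and values assumed).
from google.cloud import compute_v1

def create_hyperdisk(project: str, zone: str = "us-central1-a") -> None:
    disk = compute_v1.Disk(
        name="dbms-data-disk",
        type_=f"zones/{zone}/diskTypes/hyperdisk-extreme",  # assumed disk type name
        size_gb=1024,
        provisioned_iops=50_000,  # IOPS tuned independently of capacity
    )
    operation = compute_v1.DisksClient().insert(
        project=project, zone=zone, disk_resource=disk
    )
    operation.result()  # Block until the disk is ready to attach to a C3 VM.
```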

Enable new streaming experiences

Media and entertainment customers want streaming-optimized workloads that build on our innovation at the edge and our global network. Here are some new developments to support streaming use cases:

  • Cloud CDN, our original content delivery network offering, now offers dynamic compression to reduce the size of responses transferred from the edge to the client, helping to significantly accelerate page load times and reduce egress traffic (a configuration sketch follows this list).

  • Media CDN, introduced earlier this year, now supports the Live Stream API to ingest and package source content into HTTP Live Streaming (HLS) and DASH formats for optimized live streaming. We are also enabling two new developer-friendly integrations in Preview for Media CDN: Dynamic Ad Insertion with Google Ad Manager, which provides customized video ad placements, and third-party ad insertion using our Video Stitcher API for personalized ad placement. With these options, content producers can introduce additional monetization and personalization opportunities to their streaming services.

  • For advanced customization, we are introducing the Preview of Network Actions for Media CDN, a fully managed serverless solution based on open-source WebAssembly that lets customers deploy their own code directly in the request/response path at the edge.
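
For the Cloud CDN item above, a minimal sketch of enabling dynamic compression on an existing backend service with the google-cloud-compute Python client might look like the following. The backend service name is hypothetical, and we assume the client exposes the compression_mode field behind this feature.

```python
# Sketch: enable Cloud CDN dynamic compression on a backend service.
# "web-backend" is a hypothetical backend service name; "AUTOMATIC" lets the
# edge compress eligible responses before returning them to clients.
from google.cloud import compute_v1

def enable_dynamic_compression(project: str,
                               backend_service: str = "web-backend") -> None:
    client = compute_v1.BackendServicesClient()
    operation = client.patch(
        project=project,
        backend_service=backend_service,
        backend_service_resource=compute_v1.BackendService(
            compression_mode="AUTOMATIC",
        ),
    )
    operation.result()  # Block until the change is applied.
```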

With Media CDN, customers like Paramount+ are able to deliver a high-quality experience on the same Google infrastructure we’ve tested and tuned to serve YouTube’s 2 billion users globally. 

“Streaming is one of the key growth areas for Paramount Global. When we migrated traffic onto Media CDN, we observed consistently superior performance and offload metrics. Partnering with Google Cloud enables us to provide our subscribers with the highest quality viewing experience.” — Chris Xiques, SVP of Video Technology Group, Paramount Global

Bringing Google Cloud to your workloads 

For customers in regulated markets and in countries with strict sovereignty laws, Google Cloud has a spectrum of offerings to help them achieve varying degrees of sovereignty. Sovereign Controls by T-Systems is now GA, and Local Controls by S3NS, the Thales-Google partnership, is now available in Preview. You can expect more region and market announcements in the coming months. And for the most stringent sovereignty needs, we offer Google Distributed Cloud Hosted for a disconnected Google Cloud footprint deployed at the customer’s chosen site. 

We are also expanding functionality for existing offerings to give you even more options to run your cloud where you want, how you want. 

  • Anthos, our cloud-centric container platform to run modern apps anywhere consistently at scale, now has a more robust user interface and an upgraded fleet management experience. Create, update, and reconfigure your Anthos clusters the same way from one dashboard or command-line interface, wherever your clusters run. New fleet management capabilities let you manage growing fleets of container clusters across clouds, on-premises, and at the edge, and for different use cases (for example, isolating dev from prod, applying fleet-specific security controls, or enforcing configurations fleet-wide). And we are pleased to announce the GA of virtual machine support on Anthos clusters for retail edge environments. Learn more about how Anthos can help you run modern applications anywhere in breakout session MOD208.

  • Google Distributed Cloud Edge GPU-Optimized Config is now GA in server-rack form factors powered by 12 NVIDIA T4 GPUs. GDC Edge is designed to enable low-latency, high-performance hybrid workloads as a complement to your primary Google Cloud environment or as an independent edge deployment. We’re seeing early adoption by customers and partners for workloads such as augmented reality and retail self-checkout. In addition, software partners such as 66degrees are taking advantage of GDC Edge GPU optimization to provide real-time insights on in-store product availability, while Ipsotek is using machine intelligence at the edge to perform crowd and foot-traffic analysis in large locations such as malls, airports, and railway stations. Learn more about GDC Edge and new partner-validated solutions here, or watch breakout session MOD207 to hear how you can modernize your data center and accelerate your edge.

Infrastructure building blocks for cloud-first workloads

Most new projects are built cloud-first, and nearly half of all developers use containers today. Google Kubernetes Engine (GKE) is the most automated and scalable container management service on the market today, and with GKE Autopilot, developers can get started faster than on other leading cloud providers (a cluster-creation sketch follows the list below). At Next ‘22, we’re excited to unveil:

  • A new workshop to help you discover how to unlock efficiency and innovation with GKE Autopilot — sign up to get started.

  • A new security posture management dashboard in GKE that provides opinionated guidance on Kubernetes security, along with bundled tools that help improve the security posture of your Kubernetes clusters and workloads.
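
As a quick illustration of how little configuration a GKE Autopilot cluster needs, here is a minimal sketch using the google-cloud-container client; the project, region, and cluster name are placeholders.

```python
# Sketch: create a GKE Autopilot cluster (project, region, and name are placeholders).
# With Autopilot enabled, node provisioning, sizing, and upgrades are managed by GKE.
from google.cloud import container_v1

def create_autopilot_cluster(project: str, region: str = "us-central1",
                             name: str = "autopilot-demo") -> None:
    client = container_v1.ClusterManagerClient()
    cluster = container_v1.Cluster(
        name=name,
        autopilot=container_v1.Autopilot(enabled=True),
    )
    client.create_cluster(
        parent=f"projects/{project}/locations/{region}",
        cluster=cluster,
    )
```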

When building scalable web applications or running background jobs that require lightweight batch processing, developers are increasingly turning to serverless technologies. We’re helping developers choose serverless with numerous updates to Cloud Run, our managed serverless compute service. 

  • With new Cloud Run integrations, Cloud Run and Google Cloud services work together smoothly. For example, configuring domains with a Load Balancer or connecting to a Redis Cache is now as easy as a single click, with more scenarios on the way.

  • Customized health checks for Cloud Run services are now available in Preview, providing user-defined HTTP and TCP startup probes at the container level. This capability lets Cloud Run users define when their application has started and is ready to serve traffic, and is particularly useful for applications that require additional startup time on first initialization (see the sketch after this list).

  • To make continuous deployment easier for our customers, we added an integration between Cloud Deploy, our fully managed continuous deployment service, and Cloud Run. With this integration in place, you can do continuous deployment through Cloud Deploy directly to Cloud Run, with one-click approvals and rollbacks, enterprise security and audit, and built-in delivery metrics. Learn more on the Cloud Deploy web page.
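
For the health-check item above, the sketch below adds a TCP startup probe to an existing service through the google-cloud-run (run_v2) client. The service name, region, port, and probe timings are illustrative assumptions.

```python
# Sketch: add a TCP startup probe to a Cloud Run service (names and values assumed).
from google.cloud import run_v2

def add_startup_probe(project: str, region: str, service_id: str) -> None:
    client = run_v2.ServicesClient()
    service = client.get_service(
        name=f"projects/{project}/locations/{region}/services/{service_id}"
    )
    # Declare the container "started" once port 8080 accepts connections,
    # probing every 10 seconds and allowing up to 3 failures.
    service.template.containers[0].startup_probe = run_v2.Probe(
        tcp_socket=run_v2.TCPSocketAction(port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
        failure_threshold=3,
    )
    client.update_service(service=service).result()  # Deploys a new revision.
```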

Learn more about how to build next-level web applications with Cloud Run in breakout session BLD203.

You also need confidence in the building blocks that make up your workloads from the outset. To help get you started, we introduced Software Delivery Shield, a fully managed, end-to-end solution with a comprehensive suite of tools that help protect your software supply chain. The Software Delivery Shield launch also included the following announcements:

  • Cloud Workstations, currently in Preview, provides fully-managed development environments built to meet the needs of security-sensitive enterprises. With Cloud Workstations, developers can access secure, fast, and customizable development environments via a browser anytime and anywhere, with consistent configurations and customizable tooling. To learn more about Cloud Workstations, please visit the web page and check out breakout session BLD100.

  • A new partnership with JetBrains provides fully managed JetBrains IDEs as part of Cloud Workstations. This integration gives developers access to several popular IDEs with minimal management overhead for their admin teams.

Read more about Software Delivery Shield. 

Open source and open standards are an important part of building applications. To that end, we are excited to announce that Google has joined the Eclipse Adoptium Working Group, a consortium of leaders in the Java community working to establish a higher-quality, developer-centric standard for Java distributions. We are also making Eclipse Temurin available across Google Cloud products and services. Eclipse Temurin gives Java developers a higher-quality developer experience and more opportunities to create integrated, enterprise-focused solutions, with the openness they deserve.

Pioneering new technology with Web3 optimized infrastructure

It's incredible to see the energy in Web3 right now and the focus on developing the broader benefits, use cases, and core capabilities of blockchain. We are helping customers with scalability, reliability, security, and data, so they can spend the bulk of their time on innovation in the Web3 space — unlocking deep user value and building the next app to attract a billion users to the ecosystem.

Companies like Near, Nansen, Solana, Blockdaemon, Dapper Labs, and Sky Mavis use Google Cloud’s infrastructure. Yesterday, we announced a strategic partnership with Coinbase to better serve the growing Web3 ecosystem.

“We could not ask for a better partner to execute our vision of building a trusted bridge into the Web3 economy and accelerating broader growth and adoption of blockchain technology. I started Coinbase with a desire to create a more accessible financial system for everyone, and Google’s history of supporting open source and decentralized ecosystems made this a natural fit. Our partnership marks a major inflection point and, together, we are removing barriers to blockchain adoption and accelerating innovation.” — Brian Armstrong, CEO, Coinbase

Lift and transform traditional workloads

Not all workloads were born in the cloud — or have completed their journey to it. Google Cloud offers a variety of programs and capabilities to help optimize traditional workloads for a new cloud era, regardless of whether you’re looking to migrate them as-is, do light optimizations, or fully modernize and transform:

  • Migration Center - Now in Preview. For organizations looking to migrate to the cloud and transform their businesses, Migration Center can reduce the complexity, time, and cost by providing an integrated migration and modernization experience. It brings together our assessment, planning, migration, and modernization tooling in one centralized location with a shared data platform so you can proceed faster, more intelligently, and more easily through your journey. Learn more at the Migration Center webpage.

  • Google Cloud VMware Engine - Google Cloud is the first VMware partner to market with VMware Cloud Universal support, simplifying the migration of on-premises VMware VMs to VMware Engine in the cloud. With a cloud market-leading 99.99% availability SLA in a single site, VMware Engine is helping large organizations like Carrefour move to Google Cloud. 

  • Dual Run for Google Cloud - Now in Preview, this first-of-its-kind solution helps eliminate many of the common risks associated with mainframe modernization by letting customers simultaneously run workloads on their existing mainframes and their modernized version on Google Cloud. This means customers can perform real-time testing and quickly gather data on performance and stability with no disruption to their business. Once satisfied that the two systems are functionally equivalent, the customer can make the Google Cloud environment the system of record and operate the existing mainframe system as needed, typically as a backup. Learn more about Dual Run in this press release or on our Mainframe Modernization webpage.

  • Google Cloud Workload Manager - Now in Preview for SAP workloads, and available in the Google Cloud console, Workload Manager provides automated analysis of your enterprise systems on Google Cloud to help continuously improve system quality, reliability, and performance. Google Cloud Workload Manager evaluates your SAP workloads by detecting deviations from documented standards and best practices to help proactively prevent issues, continuously analyze workloads, and simplify system troubleshooting. 

Learn more about how organizations like Global Payments and Loblaw Technology have migrated with ease and speed in breakout session MOD104.

Protect all your workloads 

Storage is the foundation of any workload, and this year we expanded the number of regions supported by Cloud Storage dual-region buckets (GA) so you can keep your workloads protected. In the event of an outage, your application can easily access the data in the alternate region. You can also add Turbo replication (GA) to your dual-region buckets, backed by a 15-minute Recovery Point Objective (RPO) SLA.
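
As a minimal sketch with the google-cloud-storage client, the snippet below creates a dual-region bucket and then enables turbo replication. The bucket name is a placeholder, and NAM4 is used here as one example of a predefined dual-region.

```python
# Sketch: create a dual-region bucket and enable turbo replication.
# The bucket name is a placeholder; NAM4 is the predefined
# us-central1 + us-east1 dual-region, used here as an example.
from google.cloud import storage

def create_protected_bucket(project: str, bucket_name: str) -> storage.Bucket:
    client = storage.Client(project=project)
    bucket = client.create_bucket(bucket_name, location="NAM4")
    bucket.rpo = "ASYNC_TURBO"  # Turbo replication: 15-minute RPO SLA target.
    bucket.patch()              # Persist the replication setting.
    return bucket
```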

Also, we recently announced Google Cloud’s Backup and DR Service (GA). This service is a fully integrated data-protection solution for critical applications and databases that lets you centrally manage data protection and disaster recovery policies directly within the Google Cloud console, and fully protect databases and applications with a few mouse clicks. 

Learn more about storage best practices in breakout session MOD206.

Migrate, observe, and secure network traffic

We also announced many new capabilities and enhancements to the Cloud Networking portfolio that help customers migrate, modernize, secure, and observe workloads traveling to and in Google Cloud:

  • Private Service Connect helps simplify migrations by giving users more control, and now offers integrations with five new partners.

  • Network Intelligence Center can monitor the network for you with enhanced capabilities like the Performance Dashboard, which gives you latency visibility between Google Cloud and the internet.

  • We expanded our Cloud Firewall product line and introduced two new tiers: Cloud Firewall Essentials and Cloud Firewall Standard. 

For more information on what’s new with networking, read this blog and check out breakout session MOD205.

Optimize costs

Our customers’ workloads also require technical and commercial options that can deliver the best return on their investments. Here are some new enhancements that can help:

  • Flex CUDs - GA coming soon, Flexible Committed Use Discounts help you save up to 46% off on-demand Compute Engine VM pricing, in exchange for a one- or three-year commitment. Like standard CUDs, you can apply Flex CUDs across projects within the same billing account, to VMs of different sizes, and across operating systems. Learn more.

  • Batch - Now GA, Batch is a fully managed service that helps you run batch jobs easily, reliably, and at scale. Without having to install any additional software, Batch dynamically and efficiently manages resource provisioning, scheduling, queuing, and execution, freeing you up to focus on analyzing results. There’s no charge for using Batch, and you only pay for the resources used to complete the tasks. 
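
Here is a minimal sketch of submitting a script-based job with the google-cloud-batch client; the project, region, job ID, and script are placeholders.

```python
# Sketch: submit a simple script job to Batch (project, region, and script are placeholders).
from google.cloud import batch_v1

def submit_batch_job(project: str, region: str = "us-central1",
                     job_id: str = "hello-batch") -> batch_v1.Job:
    client = batch_v1.BatchServiceClient()

    # A runnable is the unit of work; here, a short shell script.
    runnable = batch_v1.Runnable(
        script=batch_v1.Runnable.Script(
            text="echo Hello from Batch task ${BATCH_TASK_INDEX}"
        )
    )
    job = batch_v1.Job(
        task_groups=[
            batch_v1.TaskGroup(
                task_count=4,  # Run four parallel tasks.
                task_spec=batch_v1.TaskSpec(runnables=[runnable]),
            )
        ],
        logs_policy=batch_v1.LogsPolicy(
            destination=batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
        ),
    )
    request = batch_v1.CreateJobRequest(
        parent=f"projects/{project}/locations/{region}",
        job=job,
        job_id=job_id,
    )
    return client.create_job(request)
```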

Learn more about how you can optimize for cost savings with Google Cloud in breakout session MOD103.

Optimize for sustainability

Any new workloads you develop should have the smallest possible carbon footprint. Google Cloud Carbon Footprint, which helps you measure, report, and reduce your cloud carbon emissions, is now GA. Since introducing Carbon Footprint last year, we have added features that cover Scope 1, 2, and 3 emissions, and provided role-based access for other users such as sustainability teams. Active Assist’s carbon emissions estimates for removing unattended projects are also now GA. Learn how to build a more sustainable cloud with lower carbon emissions in breakout session MOD103, and start using Carbon Footprint today.

“At Box, we're focused on reducing our carbon footprint and we're excited for the visibility and transparency the Carbon Footprint tool will provide as we continue our work to operate sustainably.” — Corrie Conrad, VP Communities and Impact and Executive Director of Box.org at Box

"At SAP we're working to achieve net-zero by 2030, making it crucial to measure carbon emissions all along our value chain. Collaborating with Google Cloud on Carbon Footprint ensures that accurate emissions data is available in a timely manner, helping our many Solution Areas make more sustainable decisions." — Thomas Lee, SVP and Head of Multicloud Products and Services at SAP

Designed around your workloads

These are only a few examples of the many golden paths we are enabling for your workloads. We strive to be the ‘open infrastructure cloud’ that is the most intuitive to consume because everything is designed around your workloads, providing tremendous TCO benefits. 

To get even more information on all of this and more, check out the Modernize breakout session track and these “What’s New” sessions, available on demand at Next ’22:

  • MOD105 to learn about new infrastructure solutions for enterprise architects and developers.

  • BLD106 to learn what’s new to help developers build, deploy, and run applications.

  • OPE100 to learn about the biggest announcements for DevOps teams, sysadmins, and operators.
