The latest news from around the cloud: Club Cloud Stories #4 is here! Luca Cavallin & Jacco Kulman – joined by special guest Antoni Tzavelas (Google Cloud Course Creator and DevOps enthusiast) – are going to discuss Cloud Run and GKE Autopilot!
Luca Cavallin and Matt Watson gave a talk in September about Cloud Run and GKE Autopilot, two Google Cloud services that make it easier to run containers. In this episode of Club Cloud Stories, we talk about the highlights of these products.
Cloud Run lets you develop and deploy highly scalable containerized applications on a fully managed serverless platform. With it:
- You can use any language!
- You can reap the benefits of containers: they require fewer system resources than VMs, they are easy to scale, and applications run the same regardless of where they are deployed
- You can reap the benefits of serverless too: the platform is fully managed, autoscaling and autohealing come built in, and it scales to zero.
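To give an idea of how simple that workflow is, here is a minimal sketch of deploying a container to Cloud Run with the `gcloud` CLI. It assumes the CLI is authenticated and the image has already been pushed to a registry; the service name, project, image, and region are placeholders:

```shell
# Deploy a container image to a fully managed Cloud Run service
# (service, project, image, and region are hypothetical examples).
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app:latest \
  --region europe-west4 \
  --allow-unauthenticated
```

Cloud Run builds no assumptions about the language inside the image: as long as the container listens for HTTP requests on the configured port, the deploy command above is the whole story.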
Cloud Run is a perfect choice for applications such as web services, (light) data processing systems, and scheduled tasks (e.g. document generation). Furthermore, it supports Google’s Global Load Balancer, HTTP/2, gRPC, WebSockets and Eventarc. Cloud Run is also a secure solution: workloads are sandboxed using gVisor, and it provides integrated logging & monitoring and supports Secret Manager, Binary Authorization and customer-managed encryption keys.
Cloud Run can also be a good way to save on costs! You pay only while your service is handling requests, and committed use discounts are available.
With GKE Autopilot, Google manages the cluster’s underlying infrastructure, so you can forget about node configuration and management on GKE!
GKE Autopilot provides a fully-monitored hands-off experience, with automatic node scaling and built-in redundancy.
It is especially useful for lift-and-shift migrations, web applications, and scheduled or batch workloads (e.g. CI/CD pipelines). It also enforces security through container isolation, a ban on privileged pods, and hardened underlying nodes with no SSH access.
Autopilot pricing is based on the resources your pods request, plus a flat per-cluster fee (as with "regular" GKE).
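As a sketch of the hands-off experience, creating an Autopilot cluster is a single command, and workloads are deployed with standard Kubernetes tooling afterwards. The cluster name, region, and manifest file below are placeholder assumptions:

```shell
# Create a GKE Autopilot cluster; Google provisions and manages the nodes
# (cluster name and region are hypothetical examples).
gcloud container clusters create-auto my-autopilot-cluster \
  --region europe-west4

# Deploy workloads as on any Kubernetes cluster; Autopilot sizes capacity
# from the resource requests in your pod specs, which also drive billing.
kubectl apply -f deployment.yaml
```

Because capacity follows pod resource requests, setting accurate requests in your manifests is what keeps both scheduling and the bill predictable.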