Sunday, August 24, 2025

Google Kubernetes Engine: Container Service


The power of modern software development lies in containerization, and the key to managing containers at scale is Kubernetes. But managing Kubernetes itself can be complex.

GKE is Google Cloud’s fully managed, production-ready environment for deploying, managing, and scaling containerized applications. Born from Borg, Google's own internal cluster management system, and the platform that birthed the open-source Kubernetes project, GKE offers an unparalleled level of automation, reliability, and deep integration with the rest of the Google Cloud ecosystem.

Why GKE Matters: It eliminates the heavy lifting of maintaining the Kubernetes Control Plane, allowing developers and DevOps teams to focus purely on building and iterating on code, ensuring their applications are scalable, resilient, and highly available.

Key Points You Will Master:

  • The difference between GKE’s Standard and Autopilot modes.

  • The core architecture: Control Plane versus Worker Nodes.

  • How GKE compares to AWS EKS and Azure AKS.

  • Designing a resilient 2-tier application with GKE and Python.


1. What is Google Cloud GKE?

Google Kubernetes Engine (GKE) is a managed service that runs the open-source Kubernetes container orchestration platform on Google Cloud. It provides an environment for deploying and operating your containerized applications using Google's infrastructure.

In simple terms, GKE takes the complexity out of running Kubernetes. It automates critical tasks such as:

  • Provisioning and managing worker nodes (VMs).

  • Automatic upgrades and patching of the Control Plane.

  • Implementing security policies and network configurations.

  • Auto-scaling the cluster capacity based on real-time workload demand.

GKE offers two primary modes of operation, catering to different levels of user control and management overhead:

| Mode | Control and Management | Pricing Model | Best For |
| --- | --- | --- | --- |
| GKE Standard | User manages the Worker Nodes (VM size, scaling, security hardening). | Cluster management fee (after the free tier) plus all resources on the nodes (VMs, storage), regardless of utilization. | Users who need maximum customization and control over the underlying infrastructure. |
| GKE Autopilot | Google manages both the Control Plane and the Worker Nodes. | Pay only for the resources requested by the running Pods, eliminating waste. | Users who prioritize operational simplicity, cost efficiency, and hands-off cluster management. |

2. Key Features of Google Cloud GKE

GKE's robust feature set is designed to optimize the developer experience while providing enterprise-grade performance and security.

  • Autopilot Mode: A paradigm shift in Kubernetes management, offering a serverless-like experience where Google manages cluster scaling, security, and maintenance automatically.

  • Fully Managed Control Plane: The cluster's brain (Control Plane) is hosted and operated by Google, eliminating the need for you to worry about its stability or upgrades.

  • Cluster Autoscaler: Automatically scales the number of nodes in your cluster up or down based on pending Pods or underutilized nodes.

  • Vertical Pod Autoscaler (VPA): Automatically adjusts the CPU and memory requests and limits for your Pods based on past and current usage, leading to better resource utilization and cost savings.

  • GKE Sandbox: Enhanced security via strong container isolation using technologies like gVisor, which helps secure multi-tenant workloads.

  • Workload Identity: A secure and simplified way for Kubernetes service accounts to access Google Cloud services (like Cloud Storage or BigQuery) by seamlessly binding them to Google Service Accounts.
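As an illustration of the Workload Identity binding described above, the link is expressed as an annotation on the Kubernetes service account. This is a minimal sketch; the service-account name `app-gsa` and project `my-project` are placeholders:

```yaml
# Kubernetes ServiceAccount annotated for Workload Identity.
# "app-gsa" and "my-project" are placeholder names.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gke-sa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com
```

The Google Service Account on the other side of the binding also needs a `roles/iam.workloadIdentityUser` grant for this Kubernetes service account.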


3. Explain Architecture of Google Cloud GKE

GKE follows the standard Kubernetes Master-Worker (or Control Plane-Node) architecture but distributes key components across Google Cloud’s managed infrastructure.

Control Plane (Managed by Google)

The Control Plane is the "brain" of the GKE cluster, responsible for maintaining the desired state of the cluster. In GKE, this component is fully managed by Google and is highly available by default.

Key components of the Control Plane:

  1. API Server (kube-apiserver): The frontend for the Control Plane. It exposes the Kubernetes API and handles all communications between internal and external components.

  2. Etcd: The highly available key-value store that holds the entire cluster state, configuration data, and service discovery details.

  3. Scheduler (kube-scheduler): Watches for new Pods and assigns them to a healthy Node based on resource requirements and constraints.

  4. Controller Manager: Runs various controllers that regulate the cluster state (e.g., Replication Controller ensures the correct number of replicas are running).

Worker Nodes (Managed by User in Standard, by Google in Autopilot)

The Worker Nodes are the virtual machines (Compute Engine VMs) that host the actual application containers.

Key components of the Worker Node:

  1. Kubelet: An agent running on each Node that communicates with the API server, ensuring containers described in PodSpecs are running and healthy.

  2. Container Runtime (e.g., Containerd): The software responsible for running containers (pulling images, starting, and stopping containers).

  3. Kube-Proxy: Maintains network rules on the Node, allowing network communication to and from your Pods and Services.
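Everything the Kubelet runs is described by a PodSpec. A minimal example (the name and image are arbitrary) looks like this:

```yaml
# Minimal PodSpec: the Kubelet on the assigned Node asks the container
# runtime to pull the image and then keeps the container running.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```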


4. What are the Benefits of Google Cloud GKE?

Choosing GKE offers substantial benefits beyond simple container orchestration.

  • Kubernetes Pedigree: GKE is developed by the same team that created Kubernetes. This means faster access to new features, superior stability, and deeper integration with the core Kubernetes project.

  • Operational Simplicity (Autopilot): This mode radically reduces operational overhead. You no longer manage VM patching, sizing, or node pool configuration—Google handles it all.

  • Cost Efficiency: With Autopilot, billing is based on requested CPU/Memory, eliminating the cost of wasted capacity on underutilized nodes. Additionally, GKE integrates with features like Spot Pods (similar to AWS Spot Instances) for batch workloads.

  • Deep GCP Integration: Seamless connectivity with services like Cloud Load Balancing (GCLB) for global traffic, Cloud Logging and Monitoring (Operations Suite) for observability, and Cloud Storage/Filestore for persistent storage.

  • Enhanced Security: Features like Node Auto-Upgrades, Container-Optimized OS (COS), and GKE Sandbox provide a highly secure and managed base for your applications.


5. Comparing Google Cloud GKE with AWS and Azure Services

GKE competes directly with AWS Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). While all three are excellent managed Kubernetes offerings, they differ in core management philosophy and features.

| Feature | Google Cloud GKE | AWS EKS | Azure AKS |
| --- | --- | --- | --- |
| Managed Control Plane fee | $0.10/hour (free tier covers one zonal Standard or Autopilot cluster) | $0.10/hour | Free (Free tier) |
| Serverless mode | Autopilot (fully managed nodes, pay-per-Pod) | Fargate (managed compute for Pods, with feature limits) | Virtual Nodes (limited feature set) |
| Deepest cloud integration | GCP networking, monitoring, AI/ML (Kubeflow) | AWS IAM, Security Groups, S3 | Azure AD, Azure Monitor, Azure DevOps |
| Node management default | Automated (Autopilot) | Requires Managed Node Groups setup | Requires Node Pools setup |
| Distinctive scaling feature | Vertical Pod Autoscaler (VPA) built in | Cluster Autoscaler (horizontal) | Cluster Autoscaler (horizontal) |
| Core advantage | Superior automation, VPA, and Autopilot simplicity | Deepest platform feature set, established marketplace | Free control plane, strong Microsoft ecosystem integration |

Key Insight: GKE is often the preferred choice for teams seeking the highest degree of automation, thanks to its managed, natively integrated VPA (on EKS and AKS, VPA typically has to be installed and operated by the user) and the truly hands-off Autopilot mode.


6. What are Hard Limits on Google Cloud GKE?

GKE limits are generally categorized as Quotas (adjustable) and Hard Limits (enforced by the Kubernetes Control Plane to ensure stability).

| Resource/Limit Type | GKE Standard Cluster Limit | GKE Autopilot Cluster Limit | Note |
| --- | --- | --- | --- |
| Nodes per cluster | Up to 6,500 nodes (requires specific configuration) | Up to 5,000 nodes | High limits designed for massive scale. |
| Pods per cluster | 200,000 Pods | 200,000 Pods | Protects Control Plane stability. |
| Pods per node | Typically 110 (configurable up to 256) | Dynamically set (8 to 256) | Based on node size and cluster version. |
| Etcd database size | 6 GB | 6 GB | Critical limit enforced to prevent Control Plane overload. |
| Clusters per project | Default 15 (adjustable quota) | Default 15 (adjustable quota) | Standard GCP quota, increased via support request. |

Actionable Tip: The most common "limit" users hit is the Pods per Node constraint, which is determined by the CIDR range allocated to the cluster. For large deployments, ensure you use a custom network configuration with a large secondary range for Pod IPs.
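The relationship between the per-node Pod CIDR and Pod density can be sketched in a few lines of Python. The halving rule below reflects GKE's documented behavior of reserving roughly twice as many IP addresses per node as the maximum Pod count; the function name is illustrative:

```python
import ipaddress

def max_pods_for_node_range(node_pod_cidr: str, cap: int = 110) -> int:
    """Approximate GKE's rule: usable Pods per node are about half the
    addresses in the node's Pod IP range, capped at the configured
    maximum (110 by default on Standard clusters)."""
    addresses = ipaddress.ip_network(node_pod_cidr).num_addresses
    return min(addresses // 2, cap)

# A /24 per node (256 addresses) supports the default 110-Pod maximum,
# a /25 caps out at 64 Pods, and a /26 at 32.
print(max_pods_for_node_range("10.0.0.0/24"))  # 110
print(max_pods_for_node_range("10.0.0.0/25"))  # 64
print(max_pods_for_node_range("10.0.0.0/26"))  # 32
```

This is why a large secondary range matters: a small Pod range silently lowers the schedulable Pod count on every node.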


7. Top 10 Real-World Use Case Scenarios on Google Cloud GKE

GKE is the platform of choice for highly variable, complex, or data-intensive workloads.

  1. Microservices Architecture: GKE’s native support for Istio (Google Cloud Service Mesh) and URL-based routing makes it ideal for running highly distributed, decoupled applications.

  2. Continuous Integration/Continuous Delivery (CI/CD): Running CI/CD pipelines (e.g., using Jenkins, Tekton, or GitLab runners) directly on GKE for faster, isolated, and scalable build environments.

  3. High-Traffic Web Applications: Deploying consumer-facing web services (e.g., e-commerce, media streaming) that require instantaneous scaling to handle traffic spikes, leveraging GKE's Cluster Autoscaler and VPA.

  4. AI/Machine Learning Workloads: Leveraging GKE’s ability to use specialized hardware like GPUs and TPUs to run distributed training jobs and host prediction services (TensorFlow Serving/Kubeflow).

  5. IoT and Edge Computing Backends: Running resilient API services that ingest and process massive streams of data from millions of IoT devices globally.

  6. SaaS Platforms: Providing a multi-tenant environment where different customer workloads are securely isolated using GKE Namespaces and GKE Sandbox.

  7. Batch Processing/Data Analytics: Running transient, fault-tolerant data pipelines using Kubernetes Jobs and leveraging cost savings from Spot Pods in Autopilot.

  8. Lift-and-Shift of Monoliths: Re-platforming legacy applications by containerizing them and running them on GKE to immediately gain cloud scalability and management benefits before a full refactoring.

  9. Gaming Servers: Deploying game session servers that require low-latency, high-availability, and rapid scaling for concurrent players.

  10. Hybrid and Multi-Cloud Management: Using GKE Enterprise (Anthos) to manage GKE clusters deployed on-premises, on other clouds, or in the data center from a single control plane.


8. Google Cloud GKE Availability, Resilience, and Scalability in Detail

GKE’s core advantage is the built-in and automated resilience derived from Google’s expertise in running planet-scale services.

Availability (High Uptime)

  • Managed Control Plane HA: For Regional clusters, the Control Plane components (API Server, etcd, etc.) are replicated across multiple zones within the region. If one zone fails, the Control Plane remains operational.

  • Regional Clusters: Distributing the Worker Nodes across all zones within a region ensures that a zonal outage does not impact the application's availability.

Resilience (Fault Tolerance)

  • Node Auto-Repair: GKE proactively monitors the health of the nodes. If a node fails a health check (e.g., out of disk space, unresponsive Kubelet), GKE automatically repairs or replaces it.

  • Automatic Upgrades: GKE handles Control Plane version updates automatically, and users can enable Node Auto-Upgrades. This ensures clusters are always running the latest patched versions, minimizing security risks and manual maintenance downtime.

  • Pod Anti-Affinity: Kubernetes constructs can be used to ensure application Pods (replicas) are spread across different nodes, zones, or even regions, preventing a single failure point from taking down the entire service.
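As a sketch of the anti-affinity idea, the Deployment fragment below (names and image are illustrative) forces the scheduler to place each replica in a different zone:

```yaml
# Deployment fragment spreading replicas across zones via Pod anti-affinity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resilient-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: resilient-app
  template:
    metadata:
      labels:
        app: resilient-app
    spec:
      affinity:
        podAntiAffinity:
          # "required..." is a hard rule; use
          # preferredDuringSchedulingIgnoredDuringExecution for a soft one.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: resilient-app
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: app
        image: nginx:1.25
```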

Scalability (Handling Demand)

GKE provides both Vertical and Horizontal scaling at multiple levels:

  1. Horizontal Pod Autoscaler (HPA): Increases or decreases the number of Pod replicas based on metrics like CPU utilization or custom application metrics.

  2. Vertical Pod Autoscaler (VPA): Optimizes resource usage within a Pod by automatically adjusting CPU and Memory requests and limits. This is crucial for cost optimization and efficient bin-packing.

  3. Cluster Autoscaler: The ultimate safety net. It automatically adds or removes nodes from the Node Pool when the HPA creates more Pods than the existing nodes can handle (scale-up) or when nodes become underutilized (scale-down).


9. Step-by-Step Design on Google Cloud GKE for a 2-Tier Web Application with a Python Code Example

A classic 2-tier application involves a frontend (Web/API) and a backend (Database). We will use GKE for the frontend and Cloud SQL for the managed backend database.

Design Overview

  • Tier 1 (Frontend): Python Flask application deployed as a Kubernetes Deployment on a GKE Autopilot cluster. Exposed via a LoadBalancer Service.

  • Tier 2 (Backend): Managed Cloud SQL instance (e.g., PostgreSQL).

  • Security: Workload Identity is used for secure, password-less connectivity between the GKE Pod and Cloud SQL.

Step 1: Python Flask Application (Frontend Tier)

The application must be containerized and designed to read database connection info from environment variables.

Python
# main.py (Simple Flask application)
from flask import Flask, jsonify
import os
# Assume a database connection function using Cloud SQL Proxy/Workload Identity
# In a real app, this connects to the database using credentials managed by WI.

app = Flask(__name__)

# Health Check required by GKE Liveness/Readiness Probes
@app.route('/healthz')
def health_check():
    # Production check would ensure DB connection is active
    return jsonify({"status": "healthy"}), 200

@app.route('/')
def index():
    db_conn_string = os.environ.get("DB_CONN_STR", "Cloud SQL Connected!")
    return f"Frontend App Running on GKE. DB Status: {db_conn_string}", 200

if __name__ == '__main__':
    # Flask runs on 8080 inside the container
    app.run(host='0.0.0.0', port=8080)
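To containerize the application, a Dockerfile along these lines is assumed; a requirements.txt pinning Flask is presumed to sit next to main.py:

```dockerfile
# Minimal image for the Flask frontend; assumes requirements.txt lists flask.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
EXPOSE 8080
CMD ["python", "main.py"]
```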

Step 2: Kubernetes Manifests (YAML)

These manifests define the deployment and exposure of the frontend tier.

YAML
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      # Workload Identity: Link the K8s SA to a GCP SA
      serviceAccountName: gke-sa # K8s Service Account
      containers:
      - name: web-frontend-container
        image: gcr.io/YOUR-PROJECT-ID/frontend-app:v1 # Replace with your container image
        ports:
        - containerPort: 8080
        # Resource requests drive Autopilot scheduling and billing;
        # Autopilot applies default values if they are omitted.
        resources:
          requests:
            cpu: "250m"
            memory: "512Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend-service
  # type: LoadBalancer provisions an external passthrough Load Balancer by
  # default; annotate with cloud.google.com/load-balancer-type: "Internal"
  # only if you need an internal one.
spec:
  type: LoadBalancer
  selector:
    app: web-frontend
  ports:
  - protocol: TCP
    port: 80 # External port
    targetPort: 8080 # Container port

Step 3: Deployment Steps

  1. Build and Push Image: Build the Python application into a Docker image and push it to Google Artifact Registry (or Container Registry).

    • gcloud builds submit --tag gcr.io/YOUR-PROJECT-ID/frontend-app:v1 .

  2. Create GKE Cluster: Create an Autopilot cluster for simplicity and efficiency.

    • gcloud container clusters create-auto my-2tier-cluster --region=us-central1

  3. Deploy Manifests: Apply the Kubernetes configuration files.

    • kubectl apply -f deployment.yaml

    • kubectl apply -f service.yaml


10. Official Google Blog on Google Cloud GKE

For the latest architectural deep dives, feature announcements, and best practices regarding GKE performance and configuration, refer to the official sources: the Google Cloud Blog's Containers & Kubernetes section (https://cloud.google.com/blog/products/containers-kubernetes) and the GKE documentation (https://cloud.google.com/kubernetes-engine/docs).


11. Final Conclusion

Google Kubernetes Engine (GKE) is far more than just a managed Kubernetes service: it is one of the fastest and most secure paths to enterprise-grade container orchestration. By abstracting the Control Plane and offering the Autopilot mode, GKE allows organizations to harness the full power of Kubernetes without most of its operational complexity. Whether you're building a global microservices platform, running sophisticated AI models, or simply scaling a web application, GKE provides the resilience, scalability, and deep GCP integration needed to succeed in the cloud-native era.


12. 50 Google Cloud GKE Practice Questions with Options, Answers, and Explanations

These questions are designed to test knowledge specific to GKE's features, types, architecture, and operational concepts.

Section 1: Fundamentals and Architecture (Q1-Q15)

Q1. What is the fundamental difference between GKE Standard mode and GKE Autopilot mode?

A. Standard clusters are free, Autopilot clusters are paid.

B. Standard uses VMs, Autopilot uses serverless functions.

C. Autopilot manages the worker nodes; Standard requires user management of worker nodes.

D. Autopilot only supports public endpoints.

  • Answer: C. Autopilot fully manages node provisioning, scaling, and maintenance.

Q2. Which open-source technology is GKE built upon?

A. Docker Swarm

B. Apache Mesos

C. Kubernetes

D. Cloud Foundry

  • Answer: C. GKE is Google's managed implementation of Kubernetes.

Q3. Which component of the Kubernetes Control Plane is fully managed by Google in GKE?

A. Kubelet

B. Etcd

C. Worker Nodes (in Standard mode)

D. Container Runtime

  • Answer: B. The Etcd key-value store, along with the API Server and Scheduler, is part of the managed Control Plane.

Q4. What is the primary purpose of the GKE Sandbox feature?

A. To provide a testing environment for new Kubernetes versions.

B. To isolate the Control Plane from the Worker Nodes.

C. To enhance container isolation and security using gVisor.

D. To run Windows containers.

  • Answer: C. GKE Sandbox uses gVisor to provide a stronger layer of isolation between the container and the host kernel.

Q5. When using Autopilot, what is the key resource metric you are primarily billed for?

A. VM Uptime and Cluster Management Fee.

B. The CPU and Memory requested by your running Pods.

C. Network egress only.

D. Storage volume size.

  • Answer: B. Autopilot bills based on Pod resource requests, eliminating waste from oversized nodes.

Q6. Which GKE feature automatically adjusts the number of nodes in a Node Pool?

A. Vertical Pod Autoscaler (VPA)

B. Horizontal Pod Autoscaler (HPA)

C. Cluster Autoscaler (CA)

D. GKE Sandbox

  • Answer: C. The Cluster Autoscaler modifies the size of the underlying Compute Engine instance groups (Nodes).

Q7. Which GKE object allows external internet traffic to reach your application via Google Cloud Load Balancing?

A. ClusterIP Service

B. NodePort Service

C. LoadBalancer Service

D. Headless Service

  • Answer: C. The type: LoadBalancer Service automatically provisions a GCLB and exposes the application externally.

Q8. Which file system service is typically used to provide persistent, read-write shared storage for GKE Pods?

A. Cloud Storage (GCS)

B. Cloud Spanner

C. Cloud Filestore (NFS)

D. Cloud BigQuery

  • Answer: C. Filestore provides a managed NFS, ideal for shared file storage across multiple Pods.

Q9. Which GKE object is responsible for ensuring a desired number of identical Pod replicas are running?

A. Job

B. DaemonSet

C. StatefulSet

D. Deployment

  • Answer: D. A Deployment manages the desired state and updates for stateless applications.

Q10. What is the role of the Kubelet component on a GKE Worker Node?

A. Managing the Etcd database.

B. Scheduling new Pods to nodes.

C. Ensuring containers are running in a Pod and communicating with the API server.

D. Handling external network traffic.

  • Answer: C. The Kubelet is the Node agent responsible for container health and registration.

Q11. What is the hard limit for the Etcd database size in a GKE cluster?

A. 1 GB

B. 6 GB

C. 10 GB

D. Unlimited

  • Answer: B. The 6 GB limit is enforced to protect the Control Plane’s stability and performance.

Q12. What does GKE Node Auto-Repair specifically check and fix?

A. Application Liveness Probe failures.

B. Node health issues like unresponsive Kubelet or out-of-disk space.

C. Pod resource limits.

D. Cluster autoscaler configuration.

  • Answer: B. It monitors the underlying VM/Node status.

Q13. What is the most significant operational benefit of choosing a Regional GKE cluster over a Zonal GKE cluster?

A. Lower cost.

B. Faster image pulls.

C. Resilience against a single Google Cloud Zone outage.

D. Higher Pod density.

  • Answer: C. Regional clusters distribute the Control Plane and Nodes across multiple zones for zonal resilience.

Q14. In the GKE 2-tier design, what is the Workload Identity feature used for?

A. Authenticating users to the frontend application.

B. Securely granting the Kubernetes Service Account access to GCP services like Cloud SQL.

C. Encrypting traffic between Pods.

D. Managing Docker image tags.

  • Answer: B. It maps a K8s SA to a GCP SA, providing secure access without long-lived keys.

Q15. Which GKE feature automatically suggests or sets optimized CPU and memory resource requests for your Pods?

A. Cluster Autoscaler

B. Horizontal Pod Autoscaler

C. Vertical Pod Autoscaler (VPA)

D. GKE Sandbox

  • Answer: C. VPA handles the vertical scaling/optimization of resources within a Pod.

Section 2: Comparison and Advanced Concepts (Q16-Q30)

Q16. Which of the following is a managed Kubernetes service from AWS?

A. AWS ECS

B. AWS EKS

C. AWS Fargate

D. AWS Lambda

  • Answer: B. EKS (Elastic Kubernetes Service) is AWS's direct competitor to GKE.

Q17. Which of the following is a managed Kubernetes service from Azure?

A. Azure Functions

B. Azure Container Instances (ACI)

C. Azure AKS

D. Azure VM Scale Sets

  • Answer: C. AKS (Azure Kubernetes Service) is Azure's direct competitor to GKE.

Q18. A key differentiation of GKE compared to AWS EKS and Azure AKS is its native support for:

A. Windows containers.

B. YAML manifests.

C. Vertical Pod Autoscaler (VPA).

D. Container runtime interface.

  • Answer: C. VPA is a powerful native feature that is a key differentiator for GKE’s resource optimization.

Q19. Which pricing advantage does GKE Autopilot offer over GKE Standard?

A. No charge for Control Plane.

B. No charge for unutilized resources on the node.

C. Free network egress.

D. Free storage.

  • Answer: B. Since Google manages the node sizing, you only pay for the resources your running Pods actually request.

Q20. When running CI/CD pipelines as GKE Jobs, which feature is critical for running them cost-effectively?

A. Regional clusters.

B. Node Auto-Repair.

C. Spot Pods/Preemptible VMs.

D. Workload Identity.

  • Answer: C. Spot Pods (or Preemptible VMs) offer massive discounts for fault-tolerant batch workloads.

Q21. To route traffic based on the application's URL path (e.g., /api to one backend, /web to another), GKE uses which underlying GCP networking service?

A. Cloud Interconnect

B. Cloud Load Balancing (via Ingress)

C. Cloud DNS

D. VPC Peering

  • Answer: B. Kubernetes Ingress objects provision Google Cloud Load Balancers for Layer 7 routing.

Q22. What happens if a Deployment manifest does not specify resource requests (CPU/Memory) in GKE Autopilot mode?

A. The Pod will be scheduled with default requests.

B. The API server will reject the Pod creation request.

C. The Pod will use unlimited resources.

D. The Cluster Autoscaler will fail.

  • Answer: A. Autopilot does not reject the Pod; it applies default resource requests to containers that omit them (and bills based on those values), so explicit requests are strongly recommended for predictable cost and performance.

Q23. Which type of Pod will a DaemonSet deploy?

A. A single, one-time task.

B. A stateful application.

C. A single Pod on every node in the cluster.

D. A set of identical, stateless replicas.

  • Answer: C. DaemonSets are used for cluster-level services like monitoring agents or log collectors.

Q24. In the GKE architecture, the communication between the Kubelet and the API Server is secured by:

A. Basic username/password.

B. API keys.

C. TLS encryption.

D. Cloud Firewall rules only.

  • Answer: C. All core Kubernetes component communication is secured via TLS/PKI.

Q25. What is the default node operating system used by GKE for its Worker Nodes?

A. Ubuntu

B. Windows Server

C. Container-Optimized OS (COS)

D. CentOS

  • Answer: C. COS is a hardened, minimal Linux OS designed specifically for containers.

Q26. What GKE feature would you use to prevent Pod replicas from running on the same underlying physical machine or zone?

A. Node Selectors

B. Pod Anti-Affinity

C. Liveness Probes

D. Node Pools

  • Answer: B. Anti-Affinity rules specify constraints to spread Pods across failure domains.

Q27. How does GKE integrate with Cloud Monitoring?

A. Requires installing a third-party agent.

B. Only via manual export.

C. Metrics and logs are automatically collected and integrated.

D. Only works if GKE Standard is used.

  • Answer: C. GKE has deep, native integration with Google Cloud Operations Suite (formerly Stackdriver).

Q28. The Kubernetes Liveness Probe in the 2-tier design example is used to:

A. Check if the Pod is ready to receive traffic.

B. Determine if a container needs to be restarted.

C. Load balance traffic between Pods.

D. Check the database connection.

  • Answer: B. The Liveness Probe tells the Kubelet when to restart a failing container.

Q29. What is the primary benefit of using a Regional cluster regarding the Control Plane?

A. It provides a free Control Plane.

B. It makes the cluster run faster.

C. It ensures the Control Plane itself is highly available across multiple zones.

D. It allows for unlimited scaling.

  • Answer: C. The Control Plane is replicated for HA.

Q30. If your Pods frequently crash due to resource exhaustion, which GKE feature is designed to mitigate this?

A. Horizontal Pod Autoscaler.

B. Vertical Pod Autoscaler (VPA).

C. Node Auto-Repair.

D. LoadBalancer Service.

  • Answer: B. VPA adjusts the internal Pod resource requests, fixing resource starvation issues.

Section 3: Use Cases and Operations (Q31-Q50)

Q31. Which resource type would you use to run a one-time data processing script that must complete successfully?

A. Deployment

B. StatefulSet

C. Job

D. DaemonSet

  • Answer: C. A Job is designed for one-off or batch tasks and ensures successful completion.

Q32. For a globally deployed application, GKE can use which load balancing feature to provide low-latency routing?

A. Regional Network Load Balancer.

B. Global External HTTP(S) Load Balancer.

C. Internal Load Balancer.

D. Cloud VPN.

  • Answer: B. The Global External HTTP(S) Load Balancer provides a single Anycast IP for global access.

Q33. When is it most appropriate to use GKE Standard mode over Autopilot?

A. For small, simple web apps.

B. When needing to use specific, custom VM machine types or operating systems (e.g., Windows Nodes).

C. When cost efficiency is the highest priority.

D. When using Spot Pods.

  • Answer: B. Standard mode grants full control over the worker nodes and their configurations.

Q34. When deploying a StatefulSet, which underlying GCP service typically provides the persistent storage volume (Persistent Volume Claim)?

A. Cloud Storage

B. Cloud BigQuery

C. Compute Engine Persistent Disk (or Filestore)

D. Cloud Pub/Sub

  • Answer: C. Persistent Disks are block storage suitable for stateful workloads.

Q35. What is the maximum number of nodes supported in a GKE Standard cluster (though requiring advanced configuration)?

A. 1,000 nodes

B. 5,000 nodes

C. 6,500 nodes

D. 100,000 nodes

  • Answer: C. The theoretical max is very high (6,500) but requires specific network configuration.

Q36. Which Kubernetes resource is used to expose an application internally within the GKE cluster?

A. LoadBalancer Service

B. ClusterIP Service

C. ExternalName Service

D. Ingress

  • Answer: B. ClusterIP provides a stable internal IP address accessible only within the cluster's VPC.

Q37. Which security measure ensures that the GKE Worker Nodes are running a minimized, hardened operating system?

A. GKE Sandbox

B. Workload Identity

C. Container-Optimized OS (COS)

D. Cloud Armor

  • Answer: C. COS is Google's custom OS for GKE.

Q38. For running AI/ML models, which specialized hardware can GKE Worker Nodes integrate with?

A. FPGAs

B. ASICs

C. GPUs and TPUs

D. Quantum Processors

  • Answer: C. GKE supports acceleration using both GPUs and TPUs.

Q39. What is the process of updating the Control Plane to a newer Kubernetes version in GKE called?

A. Node Scaling

B. Rollback

C. Cluster Upgrade

D. Pod Migration

  • Answer: C. GKE automates this complex process.

Q40. When configuring autoscaling for an application deployed via GKE, which two Kubernetes components work in tandem?

A. Cluster Autoscaler and Node Pool.

B. Horizontal Pod Autoscaler and Deployment.

C. Vertical Pod Autoscaler and DaemonSet.

D. LoadBalancer Service and ClusterIP.

  • Answer: B. HPA watches metrics and scales the number of replicas in the Deployment.

Q41. The initial concept for Kubernetes originated from Google's internal cluster management system known as:

A. Omega

B. Colossus

C. Borg

D. Spanner

  • Answer: C. Borg was the precursor to Kubernetes.

Q42. In the GKE 2-tier design, why is Cloud SQL used instead of running a database inside a Pod on GKE?

A. Cloud SQL is cheaper.

B. GKE cannot run databases.

C. Cloud SQL is a fully managed, stateful service better suited for production databases (Tier 2).

D. It's a security requirement.

  • Answer: C. Best practice dictates using managed services like Cloud SQL for stateful Tier 2 components.

Q43. What is the command used to interact with the Kubernetes API after setting up a GKE cluster?

A. gcloud

B. docker

C. kubectl

D. gke-admin

  • Answer: C. kubectl is the primary command-line tool for managing Kubernetes.

Q44. GKE is a strong choice for multi-cloud deployments because of its integration with which Google Cloud platform?

A. Cloud Run

B. Cloud Functions

C. Anthos (GKE Enterprise)

D. BigQuery

  • Answer: C. Anthos provides a unified management plane for GKE clusters across clouds and on-premises.

Q45. For a GKE Pod to communicate with an external non-GCP service, which networking component is involved in establishing the connection?

A. LoadBalancer

B. Kube-Proxy

C. Etcd

D. HPA

  • Answer: B. Kube-Proxy maintains the necessary network rules on the node.

Q46. Which security feature allows GKE to easily manage third-party access and compliance policies?

A. Node Auto-Upgrades.

B. Integration with Cloud Armor (via LoadBalancer).

C. Workload Identity.

D. Pod Security Policies.

  • Answer: B. Cloud Armor provides WAF and security policy enforcement at the edge.

Q47. If you specify cpu: 250m in a Pod's resource request, how much CPU time are you requesting?

A. 2.5 CPU Cores

B. 25 CPU Cores

C. 0.25 CPU Core (250 millicores)

D. 250 CPU Cores

  • Answer: C. m stands for millicores, or thousandths of a CPU core.

Q48. Which factor most heavily dictates the number of Pods per node in a GKE cluster?

A. The number of Deployments.

B. The Node's memory size.

C. The size of the Pod's secondary IP address range (CIDR).

D. The Cluster Autoscaler configuration.

  • Answer: C. The available IP addresses in the Pod CIDR block often limit Pod density.

Q49. When scaling down an Autopilot cluster, what does GKE prioritize for termination?

A. The oldest Pods.

B. The least utilized nodes.

C. The newest nodes.

D. Nodes running system Pods.

  • Answer: B. Autopilot constantly optimizes for cost, removing underutilized nodes.

Q50. Which GKE mode is the recommended choice for most new deployments by Google?

A. GKE Standard (Zonal)

B. GKE Standard (Regional)

C. GKE Autopilot

D. GKE Alpha

  • Answer: C. Autopilot is recommended due to its streamlined operations, security, and built-in cost optimization.
