Sunday, August 24, 2025

Google Cloud Load Balancing: Global Scale, Predictable Speed



In today’s digital world, an application’s reliability is non-negotiable. Whether you run a small e-commerce shop or a massive global streaming service, your users expect seamless access and sub-second response times, every time. The key to delivering this performance, regardless of traffic spikes or infrastructure failures, lies in intelligent traffic distribution.

This is the job of Google Cloud Load Balancing (GCLB).

GCLB is a family of fully distributed, software-defined load balancers designed to handle massive scale while offering robust security and global availability. It's built on the same infrastructure that powers Google Search and YouTube, providing your applications with high performance and automatic scaling.

Key Points You Will Learn:

  • What GCLB is and its fundamental role in cloud architecture.

  • The critical differences between its various types (Global vs. Regional, HTTP(S) vs. Network).

  • A detailed comparison with AWS and Azure load balancing services.

  • How to design a highly available 2-tier application using GCLB and Python.


1. What is a Google Cloud Load Balancer?

A Google Cloud Load Balancer is a managed service that distributes user traffic across multiple instances of your application, whether they reside in Compute Engine VMs, Google Kubernetes Engine (GKE) clusters, or Google Cloud Run services.

Unlike traditional load balancers, GCLB is a software-defined, globally distributed resource rather than a hardware appliance or a per-instance proxy, and it does not require pre-warming. It automatically scales to handle sudden surges in traffic, ensuring your application remains available and responsive, and it provides a single anycast IP address that directs each request to the nearest healthy backend worldwide.


2. Key Features of Google Cloud Load Balancer

GCLB offers a suite of advanced features that differentiate it from basic load-balancing solutions.

Feature | Description | Actionable Benefit
Global Anycast IP | Provides a single IP address that is advertised worldwide, routing users to the closest healthy region. | Lowest Latency: directs users to the nearest point of presence for the fastest experience.
Automatic Scaling | No need for manual scaling or capacity planning; GCLB handles traffic from zero to full scale. | Eliminates Overprovisioning: scales instantly to meet demand without requiring reserved capacity.
Health Checks | Continuously monitors the health and responsiveness of backend instances. | High Availability: automatically routes traffic away from unhealthy or failing instances.
Integration with CDN | Native integration with Google Cloud CDN (Content Delivery Network). | Performance & Cost: caches static content at the edge, reducing latency and backend load.
Layer 7 Traffic Management (URL Maps) | Allows routing decisions based on request parameters such as URL path, host header, or query parameters. | Microservices Agility: enables sophisticated routing, A/B testing, and blue/green deployments.
Session Affinity | Ensures user traffic is consistently routed back to the same backend instance. | Improved User Experience: maintains session state for applications that require sticky sessions.
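
To make the Layer 7 URL map feature above concrete, the snippet below sketches a URL map as a plain Python dictionary, mirroring the shape of the Compute Engine URL map resource (REST representation). The host, paths, project ID, and backend service names are illustrative assumptions, not values from a real project.

Python
# url_map_sketch.py (illustrative only; shape follows the Compute Engine URL map resource)
import json

PROJECT = "my-project"  # assumed project ID
BACKEND = f"projects/{PROJECT}/global/backendServices"  # base path for backend service URLs

url_map = {
    "name": "web-url-map",
    # Requests that match no host or path rule fall through to this backend.
    "defaultService": f"{BACKEND}/web-backend",
    "hostRules": [
        {"hosts": ["api.example.com"], "pathMatcher": "api-paths"},
    ],
    "pathMatchers": [
        {
            "name": "api-paths",
            "defaultService": f"{BACKEND}/api-backend",
            "pathRules": [
                # Path-based routing: peel individual API prefixes off to dedicated backends.
                {"paths": ["/users/*"], "service": f"{BACKEND}/users-backend"},
                {"paths": ["/orders/*"], "service": f"{BACKEND}/orders-backend"},
            ],
        },
    ],
}

if __name__ == "__main__":
    print(json.dumps(url_map, indent=2))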

3. Architecture and Types of Google Cloud Load Balancers

GCLB is not a single product but a portfolio of services designed for different needs (Layers 4 and 7) and scopes (Global vs. Regional).

Architecture Overview: The Google Front End (GFE)

The core technology behind global GCLB is the Google Front End (GFE) system. GFEs are software frontends deployed at Google's globally distributed points of presence (POPs); they absorb incoming traffic and terminate client connections. When a user sends a request, it reaches the geographically nearest GFE via anycast routing, and the GFE then routes the traffic over Google's private, high-speed backbone network to the best-performing healthy backend instance.

Types of Google Cloud Load Balancers

Load balancers are categorized by the network layer they operate on and their scope.

Layer 7 (Application) Load Balancing

These load balancers operate at the application layer and can inspect content (HTTP/HTTPS).

  1. Global External HTTP(S) Load Balancing:

    • Scope: Global.

    • Protocol: HTTP, HTTPS.

    • Use Case: Web applications, microservices. This is the most common type, using the GFE for global traffic distribution, SSL offload, and CDN integration.

  2. Regional Internal HTTP(S) Load Balancing:

    • Scope: Regional (within a single VPC network).

    • Protocol: HTTP, HTTPS.

    • Use Case: Internal microservice communication within a large VPC, often used for centralized API gateways.

Layer 4 (Network) Load Balancing

These balancers operate at the transport layer and handle non-HTTP traffic.

  1. Global External Proxy Network Load Balancing (TCP/SSL Proxy):

    • Scope: Global.

    • Protocol: TCP, SSL.

    • Use Case: Legacy applications, secure traffic (SSL/TLS) that needs global proxy termination and intelligent routing over the Google network.

  2. Regional External Network Load Balancing (Passthrough):

    • Scope: Regional.

    • Protocol: TCP, UDP.

    • Use Case: High-performance, non-proxied (passthrough) traffic such as gaming or IoT. Traffic is forwarded directly to the backend instances.

  3. Regional Internal Network Load Balancing:

    • Scope: Regional (within a VPC).

    • Protocol: TCP, UDP.

    • Use Case: Internal traffic balancing for applications like databases or legacy services that require a stable internal IP address.
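
Choosing between these types comes down to layer, scope, and exposure. As a quick summary of the portfolio described above, the sketch below encodes that decision as a small lookup; the categories are taken directly from this section and the helper itself is purely illustrative.

Python
# lb_type_chooser.py (illustrative summary of the GCLB portfolio described above)

# (layer, scope, exposure) -> load balancer type
GCLB_TYPES = {
    ("L7", "global", "external"): "Global External HTTP(S) Load Balancer",
    ("L7", "regional", "internal"): "Regional Internal HTTP(S) Load Balancer",
    ("L4", "global", "external"): "Global External Proxy Network Load Balancer (TCP/SSL Proxy)",
    ("L4", "regional", "external"): "Regional External Network Load Balancer (passthrough, TCP/UDP)",
    ("L4", "regional", "internal"): "Regional Internal Network Load Balancer (TCP/UDP)",
}

def choose_load_balancer(layer: str, scope: str, exposure: str) -> str:
    """Return the GCLB type for a given layer (L4/L7), scope, and exposure."""
    return GCLB_TYPES.get((layer, scope, exposure), "No matching GCLB type in this summary")

if __name__ == "__main__":
    # Public web app needing URL-based routing -> Layer 7, global, external.
    print(choose_load_balancer("L7", "global", "external"))
    # Internal database traffic on TCP -> Layer 4, regional, internal.
    print(choose_load_balancer("L4", "regional", "internal"))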



4. What are the Benefits of Google Cloud Load Balancer?

The transition to GCLB offers significant advantages over traditional appliance-based or manual load balancing solutions.

  • Zero-Downtime Scaling: GCLB automatically scales its capacity horizontally, ensuring performance even during massive, unexpected spikes (like a viral event or a holiday sale). You never hit a capacity wall.

  • Reduced Latency: The Global Anycast IP feature directs users to the closest point of presence (POP) and uses Google’s high-speed private network to reach the backend, minimizing latency for users worldwide.

  • Built-in Resilience: Automatic health checks and global failover logic ensure that if an entire region or zone goes down, traffic is instantaneously re-routed to the next healthy backend, providing superior availability.

  • Cost-Efficiency: The pay-per-use model means you don't pay for idle load balancing capacity, unlike traditional hardware or pre-provisioned cloud services.

  • Security: GCLB integrates seamlessly with Google Cloud Armor (DDoS protection and WAF) and provides native SSL/TLS certificate management, offloading complex encryption tasks.


5. Comparing Google Cloud Load Balancer with AWS and Azure Services

GCLB competes directly with Amazon Web Services' Elastic Load Balancing (ELB) family and Microsoft Azure's Load Balancing/Application Gateway services.

Feature | Google Cloud Load Balancing (GCLB) | AWS Elastic Load Balancing (ELB) | Azure Load Balancing / App Gateway
Global Scope | Global (single anycast IP across regions) | Regional (requires Global Accelerator or Route 53 for global distribution) | Regional (requires Azure Front Door for global L7)
Core Technology | Google Front Ends (GFE) and the Jupiter network | Individual load balancer instances | Instance-based (Load Balancer) or edge-based (Front Door)
Scaling | Fully automatic and instantaneous | Requires Auto Scaling group integration and target capacity configuration | Automatic scaling for App Gateway; Load Balancer is managed
Layer 7 Service | External HTTP(S) Load Balancer | Application Load Balancer (ALB) | Application Gateway
Layer 4 Service | External Network LB (passthrough/proxy) | Network Load Balancer (NLB) | Azure Load Balancer
Key Advantage | True global traffic distribution over a private backbone; no pre-warming required. | Widest array of features and deep integration with other AWS services. | Unified Azure platform and strong integration for hybrid cloud.

6. What are Hard Limits on Google Cloud Load Balancer?

GCLB is a highly scalable service, and its limits are generally very high. Most "limits" are actually quotas designed to prevent abuse and can be increased by contacting Google Cloud Support.

Resource / Limit Type | Default Limit / Note
HTTP(S) Target Proxies | 100 per project
Global Forwarding Rules | 50 per project
Backend Services | 50 per project
Throughput & Capacity | No official hard limit; GCLB automatically scales to handle virtually unlimited traffic, provided the backend instances can handle the load.
Certificates per Target HTTPS Proxy | 15
URL Map Path Matchers | Up to 100 path matchers per URL map
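
Because most of these ceilings are quotas rather than true hard limits, it is worth checking your project's current values before a large rollout. The sketch below uses the google-cloud-compute client library (assumed installed, with application default credentials configured; the project ID is a placeholder) to print the load-balancing-related quotas returned by the Compute Engine projects.get API. Exact metric names can vary, so it filters by keyword.

Python
# check_lb_quotas.py (sketch; assumes `pip install google-cloud-compute` and ADC credentials)
from google.cloud import compute_v1

PROJECT = "my-project"  # assumed project ID

# Keywords for quota metrics related to load balancing components.
LB_KEYWORDS = ("BACKEND_SERVICES", "FORWARDING_RULES", "TARGET_HTTP", "URL_MAPS", "HEALTH_CHECK")

def print_lb_quotas(project_id: str) -> None:
    """Print usage/limit for load-balancing-related Compute Engine quotas."""
    project = compute_v1.ProjectsClient().get(project=project_id)
    for quota in project.quotas:
        if any(keyword in quota.metric for keyword in LB_KEYWORDS):
            print(f"{quota.metric}: {quota.usage:.0f} / {quota.limit:.0f}")

if __name__ == "__main__":
    print_lb_quotas(PROJECT)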

7. Top 10 Real-World Use Cases for Google Cloud Load Balancer

GCLB is essential for any cloud deployment seeking high availability and performance.

  1. Global Web Applications: Distributing traffic for high-traffic public websites (e.g., e-commerce, news media) across multiple regions for global low latency and regional resilience.

  2. Microservices API Gateway: Using the HTTP(S) Load Balancer's URL maps to route different API paths (/users, /orders) to separate microservice backends (on GKE or Cloud Run).

  3. Cross-Region Disaster Recovery (DR): Configuring backends in two different regions (e.g., us-central1 and europe-west4) so that if one region fails, GCLB automatically directs all traffic to the remaining healthy region.

  4. A/B Testing and Canary Deployments: Routing a small percentage of users (e.g., 5%) to a new version of the application while keeping the majority on the stable version, managed via traffic splitting on the URL map (see the sketch after this list).

  5. Hybrid Cloud Connectivity: Using the External Network Load Balancer or an HTTP(S) LB to route traffic from the internet to instances running on-premises via Hybrid Connectivity (Cloud Interconnect/VPN).

  6. SSL Termination (Offload): Terminating SSL connections at the GFE layer, reducing the computational burden on backend servers and simplifying certificate management.

  7. Gaming and IoT (Network Passthrough): Using the Regional External Network Load Balancer (UDP/TCP) for latency-sensitive, non-HTTP traffic that requires a direct, high-throughput connection to the instance.

  8. DDoS Protection: Leveraging native integration with Cloud Armor (a WAF and DDoS protection service) at the Load Balancer edge to filter malicious traffic before it reaches the backend.

  9. Static Content Caching: Integrating with Cloud CDN to cache static assets (images, JavaScript) at the GFE edge, drastically improving load times and reducing backend load.

  10. Internal Service Discovery: Using the Internal HTTP(S) Load Balancer within a VPC to provide a single, easy-to-discover internal IP for microservices, simplifying internal communication.
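
Use case 4 (canary deployments) relies on weighted traffic splitting in the URL map. The sketch below shows the relevant fragment as a Python dictionary mirroring the URL map's REST shape; the 95/5 split, project ID, and backend service names are illustrative assumptions.

Python
# canary_split_sketch.py (illustrative URL map fragment for weighted traffic splitting)
import json

PROJECT = "my-project"  # assumed project ID
BACKEND = f"projects/{PROJECT}/global/backendServices"

# defaultRouteAction with weightedBackendServices sends roughly 95% of requests to the
# stable backend and 5% to the canary backend.
path_matcher_fragment = {
    "name": "canary-split",
    "defaultRouteAction": {
        "weightedBackendServices": [
            {"backendService": f"{BACKEND}/app-stable", "weight": 95},
            {"backendService": f"{BACKEND}/app-canary", "weight": 5},
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(path_matcher_fragment, indent=2))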


8. Google Cloud Load Balancer Availability, Resilience, and Scalability in Detail

GCLB's core value proposition stems from its architecture that guarantees extreme levels of availability, resilience, and scalability.

Availability (High Uptime)

  • Anycast IP: The use of a single, global Anycast IP ensures that the Load Balancer itself has no single point of failure and is reachable from any point on the globe.

  • Global Failover: If all backend instances in one region become unhealthy, GCLB automatically and instantaneously redirects traffic to the next closest healthy region. This is multi-region HA built-in.

Resilience (Fault Tolerance)

  • Software Defined: Since GCLB is not an appliance, it is not susceptible to typical hardware failures. The control plane is fully distributed across Google's massive global network.

  • Health Checks: Rigorous, continuous health checks monitor not just network reachability but also the application's responsiveness. If an instance starts returning errors, it is immediately pulled from the rotation.

  • Connection Draining: When a backend instance is removed (e.g., for maintenance), GCLB gracefully stops sending new connections to it while allowing existing connections to finish, minimizing disruption.
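
Connection draining is configured on the backend service. A minimal sketch, assuming the google-cloud-compute client library and an existing global backend service named web-backend-service, is shown below; the 300-second timeout is an illustrative value.

Python
# set_connection_draining.py (sketch; assumes google-cloud-compute and an existing backend service)
from google.cloud import compute_v1

PROJECT = "my-project"                   # assumed project ID
BACKEND_SERVICE = "web-backend-service"  # assumed existing global backend service

def set_draining_timeout(project_id: str, backend_service: str, timeout_sec: int = 300) -> None:
    """Patch the backend service so removed instances can finish in-flight requests."""
    client = compute_v1.BackendServicesClient()
    patch_body = compute_v1.BackendService(
        connection_draining=compute_v1.ConnectionDraining(draining_timeout_sec=timeout_sec)
    )
    operation = client.patch(
        project=project_id,
        backend_service=backend_service,
        backend_service_resource=patch_body,
    )
    operation.result()  # wait for the patch to complete

if __name__ == "__main__":
    set_draining_timeout(PROJECT, BACKEND_SERVICE)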

Scalability (Handling Demand)

  • Zero-Prewarming Required: GCLB is built to handle the largest spikes instantly. It is auto-scaling by nature, meaning there is no manual intervention needed to handle traffic increases from 0 to 100% capacity.

  • Decoupled Frontend: The GFE layer absorbs massive traffic volumes at the network edge, preventing resource exhaustion on the backend Compute resources.

  • Backend Autoscaling: GCLB integrates seamlessly with Managed Instance Groups (MIGs), which automatically scale the number of backend VMs based on GCLB metrics (e.g., CPU utilization or target requests per second).
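
Backend autoscaling is configured on the MIG's autoscaler rather than on the load balancer itself. The sketch below, assuming the google-cloud-compute client library and an existing regional MIG named web-mig in us-central1, creates an autoscaler that targets 80% of the serving capacity declared on the backend service; all names and values are illustrative.

Python
# create_lb_autoscaler.py (sketch; assumes google-cloud-compute and an existing regional MIG)
from google.cloud import compute_v1

PROJECT = "my-project"   # assumed project ID
REGION = "us-central1"   # assumed region
MIG_URL = f"projects/{PROJECT}/regions/{REGION}/instanceGroupManagers/web-mig"  # assumed MIG

def create_autoscaler(project_id: str, region: str) -> None:
    """Create a regional autoscaler driven by load balancer serving capacity."""
    autoscaler = compute_v1.Autoscaler(
        name="web-mig-autoscaler",
        target=MIG_URL,
        autoscaling_policy=compute_v1.AutoscalingPolicy(
            min_num_replicas=2,
            max_num_replicas=10,
            # Scale out when the MIG reaches 80% of the capacity set on the backend service.
            load_balancing_utilization=compute_v1.AutoscalingPolicyLoadBalancingUtilization(
                utilization_target=0.8
            ),
        ),
    )
    operation = compute_v1.RegionAutoscalersClient().insert(
        project=project_id, region=region, autoscaler_resource=autoscaler
    )
    operation.result()  # wait for creation to finish

if __name__ == "__main__":
    create_autoscaler(PROJECT, REGION)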


9. Step-by-Step Design: Google Cloud Load Balancer for a 2-Tier Web Application (with Python Code)

A typical 2-tier web application consists of a web/application tier (Tier 1) and a database tier (Tier 2). GCLB is applied to Tier 1 for public access, providing high availability.

Design Overview

  1. Frontend (Tier 1): A Python application (e.g., Flask) runs on Compute Engine VMs within a Managed Instance Group (MIG).

  2. Load Balancer: A Global External HTTP(S) Load Balancer distributes external user traffic across the VMs in the MIG.

  3. Backend (Tier 2): A Cloud SQL instance (database).

Step-by-Step Implementation Guide (Python & GCLB)

Step 1: Python Application (Tier 1)

The Flask application must be simple, lightweight, and return a 200 OK status for the health check.

Python
# app.py (Simple Flask application for a GCLB backend)
from flask import Flask
import os

app = Flask(__name__)

# --- Health Check Endpoint ---
@app.route('/health', methods=['GET'])
def health_check():
    # The GCLB health check requires a simple 200 OK response
    return 'OK', 200

# --- Main Application Endpoint ---
@app.route('/')
def index():
    # Show which instance is serving the request for verification
    instance_name = os.environ.get('HOSTNAME', 'Unknown Host')
    return f"Hello from GCLB Backend! Serving from instance: {instance_name}", 200

if __name__ == '__main__':
    # In a production VM, run with Gunicorn or similar WSGI server
    app.run(debug=True, host='0.0.0.0', port=8080)

Step 2: Deployment and Instance Group

  1. Create an Instance Template: Package the Python application and define the necessary VM settings.

  2. Create a Managed Instance Group (MIG): Use the template to create a regional MIG in multiple zones (e.g., us-central1-a and us-central1-b) for zonal resilience.

  3. Configure Autoscaling: Configure the MIG to automatically scale based on CPU utilization (e.g., maintain 70% CPU).
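
A minimal sketch of Step 2 follows, assuming the google-cloud-compute client library and an existing instance template named web-template that installs and starts the Flask app on port 8080. Regional MIGs spread instances across the region's zones by default; names and the initial size are illustrative.

Python
# create_regional_mig.py (sketch; assumes google-cloud-compute and an existing instance template)
from google.cloud import compute_v1

PROJECT = "my-project"   # assumed project ID
REGION = "us-central1"   # assumed region
TEMPLATE_URL = f"projects/{PROJECT}/global/instanceTemplates/web-template"  # assumed template

def create_regional_mig(project_id: str, region: str) -> None:
    """Create a regional Managed Instance Group from the instance template."""
    mig = compute_v1.InstanceGroupManager(
        name="web-mig",
        base_instance_name="web",       # prefix used for the generated VM names
        instance_template=TEMPLATE_URL,
        target_size=2,                  # initial size; the autoscaler adjusts it later
    )
    operation = compute_v1.RegionInstanceGroupManagersClient().insert(
        project=project_id, region=region, instance_group_manager_resource=mig
    )
    operation.result()  # wait for the MIG to be created

if __name__ == "__main__":
    create_regional_mig(PROJECT, REGION)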

Step 3: Configure the Global External HTTP(S) Load Balancer

The setup involves five key components:

  1. Health Check: Probes the /health path on port 8080 to verify that each VM is serving traffic.

  2. Backend Service: Points to the MIG, sets connection parameters (balancing mode, timeouts), and references the Health Check.

  3. URL Map: Maps the incoming request URL (e.g., /) to the Backend Service.

  4. Target HTTP(S) Proxy: Terminates the HTTP or HTTPS connection and consults the URL Map.

  5. Forwarding Rule: Associates the global public IP address and port with the Target Proxy.

Actionable Tip: Always configure Health Checks to point to a dedicated, simple endpoint (/health). If the health check fails, GCLB will automatically redirect traffic away from the problematic VM, ensuring high availability.
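
To tie Step 3 together, the sketch below wires the five components above using the google-cloud-compute client library (assumed installed, with application default credentials). The project ID, resource names, and the instance group URL are illustrative assumptions, and it assumes the MIG exposes a named port http mapped to 8080. Field and client names follow the google-cloud-compute conventions; treat this as a starting point rather than a production script.

Python
# create_global_http_lb.py (sketch; assumes google-cloud-compute, ADC credentials, and an existing MIG)
from google.cloud import compute_v1

PROJECT = "my-project"   # assumed project ID
REGION = "us-central1"   # assumed region of the MIG
MIG_GROUP = f"projects/{PROJECT}/regions/{REGION}/instanceGroups/web-mig"  # instance group created by the regional MIG (assumed)

def wait(operation):
    """Block until a global operation finishes."""
    operation.result()

def create_load_balancer(project: str) -> None:
    # 1. Health check probing /health on port 8080 (matches the Flask app).
    health_check = compute_v1.HealthCheck(
        name="web-health-check",
        type_="HTTP",
        check_interval_sec=5,
        timeout_sec=5,
        healthy_threshold=2,
        unhealthy_threshold=3,
        http_health_check=compute_v1.HTTPHealthCheck(port=8080, request_path="/health"),
    )
    wait(compute_v1.HealthChecksClient().insert(
        project=project, health_check_resource=health_check))
    hc_url = f"projects/{project}/global/healthChecks/web-health-check"

    # 2. Backend service pointing at the MIG and referencing the health check.
    backend_service = compute_v1.BackendService(
        name="web-backend-service",
        protocol="HTTP",
        port_name="http",  # assumes the MIG has a named port http:8080
        timeout_sec=30,
        health_checks=[hc_url],
        load_balancing_scheme="EXTERNAL_MANAGED",  # use "EXTERNAL" for the classic LB
        backends=[compute_v1.Backend(group=MIG_GROUP, balancing_mode="UTILIZATION")],
    )
    wait(compute_v1.BackendServicesClient().insert(
        project=project, backend_service_resource=backend_service))
    bs_url = f"projects/{project}/global/backendServices/web-backend-service"

    # 3. URL map: send everything (/) to the backend service.
    url_map = compute_v1.UrlMap(name="web-url-map", default_service=bs_url)
    wait(compute_v1.UrlMapsClient().insert(project=project, url_map_resource=url_map))
    um_url = f"projects/{project}/global/urlMaps/web-url-map"

    # 4. Target HTTP proxy that consults the URL map (use a Target HTTPS Proxy plus certificates for HTTPS).
    proxy = compute_v1.TargetHttpProxy(name="web-http-proxy", url_map=um_url)
    wait(compute_v1.TargetHttpProxiesClient().insert(
        project=project, target_http_proxy_resource=proxy))
    proxy_url = f"projects/{project}/global/targetHttpProxies/web-http-proxy"

    # 5. Global forwarding rule: the public entry point on port 80.
    rule = compute_v1.ForwardingRule(
        name="web-forwarding-rule",
        target=proxy_url,
        port_range="80",
        load_balancing_scheme="EXTERNAL_MANAGED",
    )
    wait(compute_v1.GlobalForwardingRulesClient().insert(
        project=project, forwarding_rule_resource=rule))

if __name__ == "__main__":
    create_load_balancer(PROJECT)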


10. Official Google Cloud Blog and Documentation Links for Load Balancing

For the latest architectural deep dives, feature announcements, and best practices regarding GCLB performance and configuration, refer to the Google Cloud Networking blog (https://cloud.google.com/blog/products/networking) and the Cloud Load Balancing documentation (https://cloud.google.com/load-balancing/docs).


11. Final Conclusion

Google Cloud Load Balancing is the definitive solution for delivering highly available and performant applications at Internet scale. By leveraging the Google Front End and the high-speed private network, GCLB offers sub-second global routing and zero-downtime scaling that traditional load balancing solutions cannot match. Its extensive set of features, from Layer 7 traffic management to native security and CDN integration, makes it an indispensable component for any modern, resilient cloud architecture on GCP.


12. 50 Google Cloud Load Balancer Practice Questions with Answers and Explanations

These questions are designed to test knowledge specific to GCLB's features, types, and operational concepts.

Section 1: Fundamentals and Architecture (Q1-Q15)

Q1. What is the fundamental technology that provides GCLB's global Anycast IP?

A. Cloud VPN

B. VPC Peering

C. Google Front End (GFE)

D. Managed Instance Groups

  • Answer: C. The GFE is the global system that uses Anycast routing to advertise the load balancer's IP worldwide.

Q2. What is the key characteristic of a GCLB that eliminates the need for pre-warming?

A. It is hardware-based.

B. It is software-defined and distributed.

C. It only handles TCP traffic.

D. It uses zonal forwarding rules.

  • Answer: B. Since it is fully distributed and software-defined, it can scale instantly to handle traffic surges.

Q3. Which GCLB type operates at Layer 7 and can inspect the request path?

A. Network Load Balancer (Passthrough)

B. Internal Network Load Balancer

C. External HTTP(S) Load Balancer

D. TCP Proxy Load Balancer

  • Answer: C. HTTP(S) Load Balancers operate at Layer 7 (Application) and use URL Maps for content-based routing.

Q4. A user is directed to the geographically closest healthy backend. This is primarily facilitated by which GCLB feature?

A. Health Checks

B. Session Affinity

C. Global Anycast IP

D. Connection Draining

  • Answer: C. Anycast routing ensures the user hits the nearest GFE, which then routes them over the private backbone.

Q5. Which GCLB component defines the traffic distribution logic based on the URL or Host?

A. Backend Service

B. URL Map

C. Target Proxy

D. Forwarding Rule

  • Answer: B. The URL Map is the core component for Layer 7 routing logic.

Q6. Which GCLB type is best suited for high-performance gaming traffic requiring direct, non-proxied UDP connections?

A. Global External HTTP(S) Load Balancer

B. Regional External Network Load Balancer (Passthrough)

C. Internal HTTP(S) Load Balancer

D. Global TCP Proxy Load Balancer

  • Answer: B. The Regional External Network Load Balancer provides a high-throughput, non-proxied (passthrough) path for TCP/UDP traffic.

Q7. What is the primary purpose of a Health Check in GCLB?

A. To measure latency.

B. To balance CPU utilization.

C. To determine if a backend instance is capable of accepting new connections.

D. To implement firewall rules.

  • Answer: C. Health checks ensure traffic is only sent to healthy backends.

Q8. Which GCLB component is responsible for accepting the public IP address and directing traffic to the target proxy?

A. URL Map

B. Backend Service

C. Target Proxy

D. Forwarding Rule

  • Answer: D. The Forwarding Rule defines the IP, port, and protocol and links them to the rest of the Load Balancer setup.

Q9. Which GCLB type is the only one that uses an internal (non-public) IP address?

A. External HTTP(S) Load Balancer

B. TCP Proxy Load Balancer

C. External Network Load Balancer

D. Internal Load Balancers (HTTP(S) and Network)

  • Answer: D. Internal LBs are used strictly for service communication within a single VPC network.

Q10. GCLB integrates natively with which service for application-level security and DDoS protection?

A. Cloud Firewall

B. Cloud Armor

C. Cloud Identity

D. Cloud Monitoring

  • Answer: B. Cloud Armor provides WAF and advanced network security features at the Load Balancer edge.

Q11. Where does the SSL connection terminate when using the Global External HTTPS Load Balancer?

A. On the backend VM instances.

B. At the Google Front End (GFE) layer.

C. At the VPC Firewall.

D. At the Cloud SQL instance.

  • Answer: B. SSL offload occurs at the GFE, reducing load on backend instances.

Q12. What does GCLB use to route traffic between regions?

A. The public internet.

B. Google's high-speed private backbone network.

C. A dedicated VPN tunnel.

D. A satellite link.

  • Answer: B. Traffic is routed over Google's private global network, ensuring low latency.

Q13. Which concept ensures that a client is routed to the same backend instance throughout their session?

A. Connection Draining

B. Health Check

C. Session Affinity

D. Traffic Splitting

  • Answer: C. Session affinity (or "sticky sessions") maintains the connection to a specific backend.

Q14. The HTTP(S) Load Balancer is best suited for distributing traffic to which backend types?

A. Databases

B. Raw TCP/UDP services

C. Web servers and microservices

D. File storage

  • Answer: C. HTTP(S) is built for web-facing applications and APIs.

Q15. What happens during connection draining?

A. All existing connections are instantly terminated.

B. The instance is not sent new connections but is allowed to finish existing ones.

C. The instance is immediately deleted.

D. The Load Balancer fails over to another region.

  • Answer: B. Connection draining is a graceful shutdown process used during instance updates or deletion.

Section 2: Operation, Comparison, and Scalability (Q16-Q30)

Q16. What is the primary method for scaling the application tier (Tier 1) behind a GCLB?

A. Manual resizing of the Load Balancer.

B. Using Managed Instance Groups (MIGs) with autoscaling policies.

C. Using a fixed number of dedicated instances.

D. Configuring a different Load Balancer type.

  • Answer: B. MIGs automatically adjust the number of VMs based on traffic metrics reported by the Load Balancer.

Q17. The Regional External Network Load Balancer is a passthrough load balancer. What does "passthrough" mean?

A. It only handles HTTP traffic.

B. It forwards the traffic directly to the backend instance without proxy termination.

C. It only handles internal traffic.

D. It requires SSL termination.

  • Answer: B. Passthrough means the backend VM sees the client's original IP address and handles the connection directly.

Q18. What is the primary difference in global scope between GCLB and AWS's Application Load Balancer (ALB)?

A. ALB is global, GCLB is regional.

B. GCLB is inherently global (single IP), while ALB is regional and needs external services for global reach.

C. Both are fully global with Anycast IPs.

D. ALB handles more protocols.

  • Answer: B. GCLB's global Anycast IP is a key differentiator.

Q19. Which Azure service is the functional equivalent to the Global External HTTP(S) Load Balancer?

A. Azure Load Balancer

B. Azure Front Door

C. Azure Virtual Network Gateway

D. Azure Application Gateway

  • Answer: B. Azure Front Door is the global, Layer 7 equivalent that provides edge routing.

Q20. Which Google Cloud service does GCLB integrate with to cache static content at the edge?

A. Cloud Storage

B. Cloud CDN

C. Cloud SQL

D. Cloud BigQuery

  • Answer: B. Cloud CDN uses the GFE layer to cache content at the edge.

Q21. When implementing a canary deployment using GCLB, which component is used to split traffic between the old and new backend versions?

A. Forwarding Rule

B. URL Map

C. Health Check

D. Target Proxy

  • Answer: B. Traffic splitting percentages are configured within the URL Map's backend service settings.

Q22. What is the default quota limit for the number of Backend Services per GCP project (approximately)?

A. 5

B. 10

C. 50

D. 1000

  • Answer: C. The default quota is typically 50.

Q23. Which protocol is NOT directly handled by a Regional Network Load Balancer (Passthrough)?

A. TCP

B. UDP

C. HTTP

D. ICMP (as a protocol for health checks)

  • Answer: C. The passthrough Network Load Balancer forwards TCP/UDP packets without interpreting application-layer protocols; HTTP-aware handling is the job of the Layer 7 HTTP(S) load balancers.

Q24. When using a Regional Internal HTTP(S) Load Balancer, who is the traffic source?

A. The public internet.

B. Other internal services within the same VPC network.

C. On-premises users via VPN.

D. AWS users.

  • Answer: B. Internal LBs serve traffic between services inside the private VPC network.

Q25. What is the main resilience benefit of having a Backend Service pointing to MIGs in multiple zones within a single region?

A. Protection against regional failure.

B. Protection against zonal failure.

C. Protection against Load Balancer failure.

D. Faster scaling.

  • Answer: B. Multiple zones provide high availability within the region by surviving a single zone outage.

Q26. If GCLB routes traffic over the public internet instead of the private Google backbone, what is the most likely reason?

A. The Load Balancer is misconfigured.

B. The application is running on-premises.

C. The backend is in a region not connected to the Google private network.

D. The firewall is blocking the traffic.

  • Answer: B. For a Hybrid Cloud setup, traffic between the GFE and the on-prem endpoint must use the public internet or VPN/Interconnect.

Q27. When should you choose a Network Load Balancer (Passthrough) over an HTTP(S) Load Balancer?

A. When you need URL-based routing.

B. When you need SSL offload.

C. When you need high-performance, direct TCP/UDP connections and the client's source IP is required on the backend.

D. When the application is purely internal.

  • Answer: C. The Passthrough LB maintains the original client IP and is used for non-HTTP, high-throughput use cases.

Q28. What is the primary method GCLB uses to determine traffic distribution when no session affinity is used?

A. Random routing.

B. Round-robin or least-loaded algorithm based on the chosen load balancing scheme.

C. Routing to the instance with the lowest CPU.

D. Routing to the alphabetically first instance.

  • Answer: B. GCLB uses intelligent, balanced algorithms to distribute traffic across backends.

Q29. Which component in the HTTP(S) LB setup is responsible for managing SSL certificates?

A. Forwarding Rule

B. URL Map

C. Target HTTPS Proxy

D. Backend Service

  • Answer: C. The Target HTTPS Proxy is where the SSL termination and certificate configuration happen.

Q30. What is the term for gracefully removing a backend instance while allowing existing connections to complete?

A. Backend Deletion

B. Connection Draining

C. Health Failure

D. Instance Group Shutdown

  • Answer: B. Connection draining minimizes user disruption during maintenance.

Section 3: Advanced Concepts and Practices (Q31-Q50)

Q31. How does GCLB ensure that its scalability is virtually unlimited?

A. By reserving a massive amount of hardware for each user.

B. By being a distributed system built into Google's core network fabric (GFE).

C. By using a single, monolithic control plane.

D. By requiring users to pre-warm the service.

  • Answer: B. Its software-defined nature on the global GFE network provides instantaneous scalability.

Q32. If an instance fails the health check, what happens to the existing connections to that instance?

A. They are immediately terminated.

B. They are allowed to continue while no new connections are sent.

C. They are migrated to a healthy instance.

D. The instance is immediately deleted.

  • Answer: B. GCLB typically uses connection draining when an instance fails the health check (before it is completely removed by the MIG).

Q33. What is the primary benefit of using a Regional Internal HTTP(S) Load Balancer?

A. Handling internet traffic.

B. Simplifying and centralizing internal microservice communication.

C. Providing a static public IP.

D. Securing legacy network protocols.

  • Answer: B. It provides an internal API gateway for microservices.

Q34. In the Python 2-tier design, why is the /health endpoint necessary?

A. To check the database connection.

B. To provide a simple, reliable target for the GCLB Health Check process.

C. To display application metrics.

D. To implement firewall rules.

  • Answer: B. The dedicated health check endpoint ensures the LB knows the application is responsive.

Q35. If you need to route different subdomains (e.g., api.example.com and web.example.com) to different backend services, which part of the Load Balancer configuration do you use?

A. Forwarding Rule

B. Path Rules

C. Host Rules (in the URL Map)

D. Target Proxy

  • Answer: C. Host Rules within the URL Map handle routing based on the hostname.

Q36. When migrating from a monolithic application to a microservices architecture, which GCLB feature is essential for gradual migration?

A. Session Affinity

B. Passthrough Mode

C. URL Map/Path-based routing

D. Network LB

  • Answer: C. Path-based routing allows you to peel off specific paths (/new_service) to the new microservice backend.

Q37. Which GCLB type is best for distributing internal traffic to a set of internal database replica VMs?

A. Internal HTTP(S) Load Balancer

B. Internal Network Load Balancer

C. External HTTP(S) Load Balancer

D. TCP Proxy Load Balancer

  • Answer: B. The Internal Network LB is appropriate for non-HTTP, TCP/UDP internal services like databases.

Q38. Why is pre-warming often necessary for some cloud load balancers but not GCLB?

A. Because GCLB is regional.

B. Because other load balancers might be instance-based and need time to spin up capacity.

C. Because GCLB uses HTTP.

D. Because GCLB handles less traffic.

  • Answer: B. GCLB’s GFE architecture is always ready and scales horizontally instantly.

Q39. What is the maximum number of certificates supported by a single Target HTTPS Proxy?

A. 1

B. 5

C. 15

D. 100

  • Answer: C. A single Target HTTPS Proxy supports up to 15 certificates.

Q40. If a user needs to see the original client IP address on the backend VM, which GCLB type must they use?

A. External HTTP(S) Load Balancer

B. Regional External Network Load Balancer (Passthrough)

C. TCP Proxy Load Balancer

D. Global HTTP(S) Load Balancer

  • Answer: B. Only the Passthrough Network Load Balancer preserves the original client IP.

Q41. What is the key advantage of using a Regional Internal HTTP(S) Load Balancer over a traditional Internal Network Load Balancer?

A. Lower cost.

B. It can perform URL-based routing (Layer 7).

C. It supports UDP.

D. It has a public IP.

  • Answer: B. The Layer 7 capability (URL Maps) is the key advantage for internal service routing.

Q42. GCLB integrates with MIGs using which kind of balancing mode?

A. CPU Utilization only.

B. HTTP Request Count or Utilization.

C. Memory Utilization only.

D. Network throughput.

  • Answer: B. MIGs can autoscale based on various GCLB metrics, most commonly HTTP Request Count (per second) or general backend utilization.

Q43. Which type of GCLB must be set up to enable a service endpoint in a different VPC network via VPC Peering?

A. External HTTP(S) LB

B. External Network LB

C. Internal HTTP(S) Load Balancer

D. TCP Proxy LB

  • Answer: C. Internal LBs can be accessed across VPC peering connections.

Q44. What is the maximum number of path matchers supported by a single URL Map?

A. 10

B. 100

C. 1,000

D. Unlimited

  • Answer: B. A single URL Map supports up to 100 path matchers.

Q45. Which of the following is a key resilience benefit provided by GCLB in a multi-region deployment?

A. Autoscaling of VMs.

B. Automatic cross-region failover.

C. Automatic database backup.

D. Manual traffic routing.

  • Answer: B. GCLB’s global nature allows for automatic failover between regions.

Q46. The HTTP(S) Load Balancer is considered a "proxy" load balancer because:

A. It terminates the connection at the backend.

B. It terminates the client connection at the GFE and establishes a new connection to the backend.

C. It only forwards TCP packets.

D. It requires a separate proxy service.

  • Answer: B. Proxy termination at the edge is the mechanism used for Layer 7 control and global routing.

Q47. If an application is using the External HTTP(S) LB, where should the application's SSL certificate be uploaded?

A. To the backend VM.

B. To the Cloud Storage bucket.

C. To the Target HTTPS Proxy.

D. To the Forwarding Rule.

  • Answer: C. The certificate is managed by the Target HTTPS Proxy for offload.

Q48. Which component links the public IP address to the rest of the load balancer configuration?

A. URL Map

B. Backend Service

C. Target Proxy

D. Forwarding Rule

  • Answer: D. The Forwarding Rule is the glue that makes the public IP live.

Q49. When scaling up instances, GCLB will wait for what status from the Health Check before sending traffic to a new instance?

A. Waiting

B. Healthy

C. Unhealthy

D. Draining

  • Answer: B. Traffic is only directed to instances that have reported a "Healthy" status.

Q50. Why is the TCP Proxy Load Balancer suitable for legacy applications?

A. It supports old HTTP protocols.

B. It provides global routing and security benefits for non-HTTP, proxy-based TCP/SSL traffic.

C. It is the cheapest option.

D. It uses passthrough mode.

  • Answer: B. The TCP Proxy LB globalizes and secures non-HTTP traffic without requiring the application to change.
