Kubernetes Networking and Load Balancers

This article describes the network model and how Kubernetes networking works. It also covers how to set up Kubernetes load balancers for application access.

If you are unfamiliar with the concepts of services and pods, start with the article “Kubernetes Basics”.

Cluster Networking Model

In a Kubernetes cluster, each pod receives its own IP address from the Pod CIDR range specified when the cluster was created. Pods can communicate with each other directly via these IP addresses, regardless of which node (server) they are running on.
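To see these addresses in practice, you can list the pods together with their IPs and the nodes they run on (this assumes kubectl is already configured for your cluster):

```shell
# Shows each pod's IP (allocated from the Pod CIDR) and its node
kubectl get pods -o wide
```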

Services provide stable access to a group of pods: a service gets a static address and distributes traffic among the pods behind it.

Service Types

  • ClusterIP – accessible only within the cluster. The service receives a virtual IP address from the service address range. Used for communication between application components.
  • NodePort – opens a fixed port on each cluster node. Traffic arriving at this port is redirected to the service’s pods.
  • LoadBalancer – creates a load balancer on the platform side. Suitable for providing external access to the application.

Creating a ClusterIP Service

ClusterIP is the default service type. It provides a stable internal address for a group of pods, accessible only from within the cluster. Use it for communication between application components – for example, so that the frontend can reach the backend by a fixed name.

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
  type: ClusterIP

After applying the manifest, the service will be accessible within the cluster by the name backend (or the fully qualified DNS name backend.default.svc.cluster.local) and by the assigned ClusterIP:

kubectl apply -f backend-service.yaml
kubectl get svc backend

NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
backend   ClusterIP   10.96.45.12    <none>        8080/TCP   5s

Other pods in the cluster can access this service by name:

http://backend:8080
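To check this connectivity from inside the cluster, you can start a temporary pod with curl (the pod name curl-test and the curlimages/curl image here are just examples):

```shell
# One-off pod, removed automatically after the command exits
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://backend:8080
```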

Creating a NodePort Service

A NodePort service opens a fixed port (from the 30000–32767 range) on every node in the cluster. Traffic arriving at this port on any node is redirected to the service’s pods.

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: default
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort

Once deployed, the service will be available on port 30080 on each node in the cluster:

kubectl apply -f frontend-nodeport.yaml
kubectl get svc frontend

NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
frontend   NodePort   10.96.78.90    <none>        80:30080/TCP   5s

Please note!
For production workloads, a LoadBalancer is recommended instead of a NodePort: it provides a single entry point and distributes traffic.
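With the manifest above applied, the application can be reached on port 30080 of any node. A minimal sketch, assuming 203.0.113.10 stands in for the external address of one of your worker nodes:

```shell
NODE_IP=203.0.113.10   # example address – substitute a real node IP
curl http://$NODE_IP:30080
```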

Load balancer

Please note!
Load balancers are free of charge during the public beta.

When you create a LoadBalancer service, the platform automatically assigns a load balancer that distributes incoming traffic across the application’s pods.

External and internal load balancers

The load balancer type is specified using the lb.beget.com/type annotation:

external – the load balancer is assigned a public IP address accessible from the internet. Suitable for:

  • Websites and web applications
  • Public APIs
  • Any services accessed by external users

internal – the load balancer is accessible only within a private network. Suitable for:

  • Internal APIs and inter-service communication
  • Databases and other services that should not be accessible from the outside
  • Components that interact only with other services in the cloud

Creating an external load balancer

Save the manifest to the file loadbalancer-external.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    lb.beget.com/algorithm: round_robin
    lb.beget.com/healthcheck-interval-seconds: "60"
    lb.beget.com/healthcheck-timeout-seconds: "5"
    lb.beget.com/type: external
  name: nginx
  namespace: default
spec:
  allocateLoadBalancerNodePorts: true
  ports:
  - port: 80
    targetPort: nginx
  selector:
    app: nginx
  type: LoadBalancer

Apply the manifest:

kubectl apply -f loadbalancer-external.yaml

Creating an internal load balancer

Save the manifest to the file loadbalancer-internal.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    lb.beget.com/algorithm: round_robin
    lb.beget.com/healthcheck-interval-seconds: "60"
    lb.beget.com/healthcheck-timeout-seconds: "5"
    lb.beget.com/type: internal
  name: internal-api
  namespace: default
spec:
  allocateLoadBalancerNodePorts: true
  ports:
  - port: 8080
    targetPort: api
  selector:
    app: internal-api
  type: LoadBalancer

Apply the manifest:

kubectl apply -f loadbalancer-internal.yaml

Balancer annotations

Balancer settings are configured via annotations in the metadata.annotations section of the manifest.

Load Balancer Type

  • lb.beget.com/type – values: external, internal. Load balancer type: public (external) or private (internal)

Load Balancing Algorithm

  • lb.beget.com/algorithm – values: round_robin, least_connections. Algorithm for distributing traffic among backends

Health Checks

Health check annotations determine how the load balancer checks the availability of backends. If a backend does not respond within the timeout, the load balancer stops directing traffic to it until the check succeeds again.

  • lb.beget.com/healthcheck-interval-seconds – a positive integer. The interval between checks (in seconds). Default: 5.
  • lb.beget.com/healthcheck-timeout-seconds – a positive integer. The timeout for waiting for a response (in seconds). Default: 30.
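For example, to check backends every 10 seconds with a 3-second response timeout (the values here are illustrative), set the annotations in the service metadata; note that annotation values must be quoted strings:

```yaml
metadata:
  annotations:
    lb.beget.com/healthcheck-interval-seconds: "10"
    lb.beget.com/healthcheck-timeout-seconds: "3"
```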

Checking the Load Balancer Status

After applying the manifest, verify that the load balancer has been created and assigned an address:

kubectl get svc <service-name>

Replace <service-name> with the name from the metadata.name field of your manifest. In our example, this is nginx:

Expected output for an external load balancer:

NAME    TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
nginx   LoadBalancer   10.96.162.132   155.212.142.254   80:31160/TCP   10s

  • CLUSTER-IP – the service’s internal address within the cluster
  • EXTERNAL-IP – the address at which the load balancer is accessible from the outside (for external) or within a private network (for internal)
  • PORT(S) – the service port and the assigned NodePort

If <pending> is displayed in the EXTERNAL-IP column, the load balancer is still being created. Repeat the command after a while or use the --watch flag to monitor:

kubectl get svc nginx --watch

IP address allocation usually takes no more than a minute. If the <pending> status persists for more than a few minutes, check the service events using the kubectl describe svc nginx command.
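Once an EXTERNAL-IP is assigned, you can confirm that the balancer forwards traffic to the pods (using the example address from the output above):

```shell
# -I requests only the response headers
curl -I http://155.212.142.254/
```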

Deleting the load balancer

To delete the load balancer, delete the corresponding service:

kubectl delete svc nginx

After deleting the service, the platform-side load balancer will also be deleted, and the assigned IP address will be released.

All articles in this section

  1. Kubernetes (K8s) – An Overview of the Managed Kubernetes Service
  2. Kubernetes Basics – Key Concepts: Cluster, Nodes, Pods, Services
  3. Creating and Configuring a Cluster – Master Node Configuration, Networking, and Worker Groups
  4. Connecting to the Cluster and Working with kubectl – kubeconfig, Connectivity, and Core Kubernetes Tools
  5. Cluster management – adding nodes, changing configuration, updating, and deleting
  6. Networking and load balancers – You are here
  7. Limits, quotas, and constraints – platform constraints, what can and cannot be changed

If you have any questions, please submit a ticket via the account dashboard (under “Help and Support”). And if you’d like to discuss this article or our products, we’d love to see you in our Telegram community.