This article describes the network model and how Kubernetes networking works. It also covers how to set up Kubernetes load balancers for application access.
If you are unfamiliar with the concepts of services and pods, start with the article “Kubernetes Basics”.
Cluster Networking Model
In a Kubernetes cluster, each pod receives its own IP address from the Pod CIDR range specified when the cluster was created. Pods can communicate with each other directly via IP addresses, regardless of which node (group of servers) they are running on.
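You can see the IP address assigned to each pod, and the node it runs on, with the -o wide output flag (the pod names, addresses, and nodes below are illustrative):

```
kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE
backend-7d4b9c5f6-x2x9q    1/1     Running   0          2m    10.112.0.14   node-1
frontend-5f6d8b7c9-k3j2l   1/1     Running   0          2m    10.112.1.27   node-2
```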
Services provide stable access to a group of pods: each service has a static address and distributes traffic among the pods behind it.
Service Types
- ClusterIP – accessible only within the cluster. The service receives a virtual IP address from the service address range. Used for communication between application components.
- NodePort – opens a fixed port on each cluster node. Traffic arriving at this port is redirected to the service’s pods.
- LoadBalancer – creates a load balancer on the platform side. Suitable for providing external access to the application.
Creating a ClusterIP Service
ClusterIP is the default service type. It provides a stable internal address for a group of pods, accessible only from within the cluster. Use it for communication between application components – for example, so that the frontend (external interface) can address the backend (server side) by a fixed name. Save the following manifest to the file backend-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP

After applying the manifest, the service will be accessible within the cluster by the name backend (or the fully qualified DNS name backend.default.svc.cluster.local) and by the assigned ClusterIP:
kubectl apply -f backend-service.yaml
kubectl get svc backend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.96.45.12 <none> 8080/TCP 5s

Other pods in the cluster can access this service by name:
http://backend:8080

Creating a NodePort Service
A NodePort service opens a fixed port (from the 30000–32767 range) on each node in the cluster. Traffic arriving at this port on any node is redirected to the service’s pods. If the nodePort field is omitted, Kubernetes assigns a free port from this range automatically. Save the following manifest to the file frontend-nodeport.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: default
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  type: NodePort

Once deployed, the service will be available on port 30080 on each node in the cluster:
kubectl apply -f frontend-nodeport.yaml
kubectl get svc frontend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend NodePort 10.96.78.90 <none> 80:30080/TCP 5s

Load balancer
When you create a LoadBalancer service, the platform automatically assigns a load balancer that distributes incoming traffic across the application’s pods.
External and internal load balancers
The load balancer type is specified using the lb.beget.com/type annotation:
external – the load balancer is assigned a public IP address accessible from the internet. Suitable for:
- Websites and web applications
- Public APIs
- Any services accessed by external users
internal – the load balancer is accessible only within a private network. Suitable for:
- Internal APIs and inter-service communication
- Databases and other services that should not be accessible from the outside
- Components that interact only with other services in the cloud
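In both cases the manifest is an ordinary Service of type LoadBalancer; only the lb.beget.com/type annotation differs. A minimal sketch of the relevant fields (the service name here is illustrative):

```
apiVersion: v1
kind: Service
metadata:
  name: my-service            # illustrative name
  annotations:
    lb.beget.com/type: external   # or: internal
spec:
  type: LoadBalancer
```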
Creating an external load balancer
Save the manifest to the file loadbalancer-external.yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    lb.beget.com/algorithm: round_robin
    lb.beget.com/healthcheck-interval-seconds: "60"
    lb.beget.com/healthcheck-timeout-seconds: "5"
    lb.beget.com/type: external
  name: nginx
  namespace: default
spec:
  allocateLoadBalancerNodePorts: true
  ports:
    - port: 80
      targetPort: nginx
  selector:
    app: nginx
  type: LoadBalancer

Apply the manifest:
kubectl apply -f loadbalancer-external.yaml

Creating an internal load balancer
Save the manifest to the file loadbalancer-internal.yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    lb.beget.com/algorithm: round_robin
    lb.beget.com/healthcheck-interval-seconds: "60"
    lb.beget.com/healthcheck-timeout-seconds: "5"
    lb.beget.com/type: internal
  name: internal-api
  namespace: default
spec:
  allocateLoadBalancerNodePorts: true
  ports:
    - port: 8080
      targetPort: api
  selector:
    app: internal-api
  type: LoadBalancer

Apply the manifest:
kubectl apply -f loadbalancer-internal.yaml

Load balancer annotations
Load balancer settings are configured via annotations in the metadata.annotations section of the manifest.
Load Balancer Type
lb.beget.com/type – values: external, internal. Load balancer type: public or private.
Load Balancing Algorithm
lb.beget.com/algorithm – values: round_robin, least_connections. Algorithm for distributing traffic among backends.
Health Checks
Healthcheck annotations determine how the load balancer checks the availability of backend services. If a service does not respond within the timeout period, the load balancer stops directing traffic to it until it is restored.
lb.beget.com/healthcheck-interval-seconds – a positive integer. The interval between checks (in seconds). Default: 5.
lb.beget.com/healthcheck-timeout-seconds – a positive integer. The timeout for waiting for a response (in seconds). Default: 30.
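The annotations above can be combined in a single metadata.annotations block. For example, more frequent health checking than the defaults might look like this (the interval and timeout values here are illustrative, not recommendations):

```
metadata:
  annotations:
    lb.beget.com/type: external
    lb.beget.com/algorithm: least_connections
    lb.beget.com/healthcheck-interval-seconds: "10"   # illustrative value
    lb.beget.com/healthcheck-timeout-seconds: "3"     # illustrative value
```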
Checking the Load Balancer Status
After applying the manifest, verify that the load balancer has been created and assigned an address:
kubectl get svc <service-name>

Replace <service-name> with the name from the metadata.name field in your manifest. In our example, this is nginx.
Expected output for an external load balancer:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.96.162.132 155.212.142.254 80:31160/TCP 10s

- CLUSTER-IP – the service’s internal address within the cluster
- EXTERNAL-IP – the address at which the load balancer is accessible from the outside (for external) or within a private network (for internal)
- PORT(S) – the service port and the assigned NodePort
If <pending> is displayed in the EXTERNAL-IP column, the load balancer is still being created. Repeat the command after a while or use the --watch flag to monitor:
kubectl get svc nginx --watch

IP address allocation usually takes no more than a minute. If the <pending> status persists for more than a few minutes, check the service events using the kubectl describe svc nginx command.
Deleting the load balancer
To delete the load balancer, delete the corresponding service:
kubectl delete svc nginx

After deleting the service, the platform-side load balancer will also be deleted, and the assigned IP address will be released.
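You can confirm the removal by querying the service again; once it no longer exists, kubectl reports an error (output shown for illustration):

```
kubectl get svc nginx
Error from server (NotFound): services "nginx" not found
```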
All articles in this section
- Kubernetes (K8s) – An Overview of the Managed Kubernetes Service
- Kubernetes Basics – Key Concepts: Cluster, Nodes, Pods, Services
- Creating and Configuring a Cluster – Master Node Configuration, Networking, and Worker Groups
- Connecting to the Cluster and Working with kubectl – kubeconfig, Connectivity, and Core Kubernetes Tools
- Cluster management – adding nodes, changing configuration, updating, and deleting
- Networking and load balancers – You are here
- Limits, quotas, and constraints – platform constraints, what can and cannot be changed
If you have any questions, please submit a ticket via the account dashboard (under “Help and Support”). And if you’d like to discuss this article or our products, we’d love to see you in our Telegram community.