AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
You can install the AWS Load Balancer Controller by following the instructions in the kubernetes-sigs/aws-load-balancer-controller repository. Prerequisites:
- AWS Load Balancer Controller >= v2.3.0
- Kubernetes >= v1.20, or EKS >= 1.16, or for Service type `LoadBalancer` the patch releases 1.18.18+ (for 1.18) or 1.19.10+ (for 1.19)
- Pods have native AWS VPC networking configured
Network Load Balancer
How To Provision
To provision a Network Load Balancer, you need to create a Service of type `LoadBalancer`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: platform
  labels:
    app: web-app
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: web-app
  type: LoadBalancer
```
The following actions are performed by the AWS Load Balancer Controller when it sees a Service object of type `LoadBalancer`:
- An NLB is created in AWS for the service. Depending on the annotations, it is either internal or internet-facing.
- Target groups are created in either instance or IP mode, depending on the annotations.
- Listeners are created for each port detailed in the service definition.
- Health checks are configured.
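Once provisioning succeeds, the controller publishes the NLB's DNS name on the Service. As a sketch, the relevant part of the Service status looks like the following; the hostname shown is an illustrative placeholder, not a real endpoint:

```yaml
# Sketch: the controller records the NLB's DNS name in the Service status.
# The hostname below is an illustrative placeholder.
status:
  loadBalancer:
    ingress:
      - hostname: k8s-platform-webapp-0123456789.elb.us-west-2.amazonaws.com
```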
The AWS Load Balancer Controller auto-discovers network subnets by default.
For discovery to succeed, you need to tag your subnets as follows:
| Tag Key | Tag Value | Purpose |
| --- | --- | --- |
| `kubernetes.io/role/elb` | 1 | Indicates that the subnet is public. Used if the NLB is internet-facing. |
| `kubernetes.io/role/internal-elb` | 1 | Indicates that the subnet is private. Used if the NLB is internal. |
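If you manage your VPC with CloudFormation, the role tag can be attached directly to the subnet resource. A minimal sketch, where the resource name, VPC reference, and CIDR are placeholders:

```yaml
# Minimal CloudFormation sketch: tag a public subnet so the controller
# can discover it for internet-facing NLBs. All values are placeholders.
PublicSubnet:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref MyVpc
    CidrBlock: 10.0.1.0/24
    MapPublicIpOnLaunch: true
    Tags:
      - Key: kubernetes.io/role/elb
        Value: "1"
```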
Instance target mode supports pods running on AWS EC2 instances. In this mode, the NLB sends traffic to the instances, and the kube-proxy on the individual worker nodes forwards it to the pods, potentially passing through one or more other worker nodes in the Kubernetes cluster.
IP target mode supports pods running on both AWS EC2 instances and AWS Fargate. In this mode, the NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster.
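As a minimal sketch, an IP-mode Service (for example, for pods running on Fargate) differs from the earlier example only in its annotations; the annotation values are the documented ones, and the rest of the manifest mirrors the web-app example above:

```yaml
# Sketch: request an NLB in IP target mode (e.g. for pods on Fargate).
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: platform
  labels:
    app: web-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: web-app
  type: LoadBalancer
```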
Let’s look at some of the annotations that you can configure and their behaviors.
| Annotation | Behavior |
| --- | --- |
| `service.beta.kubernetes.io/aws-load-balancer-type: "external"` | Use the external AWS Load Balancer Controller instead of the in-tree controller available in Kubernetes |
| `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"` | Provision the NLB in instance mode |
| `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"` | Provision the NLB in IP mode |
| `service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"` | Provision an internal NLB |
| `service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"` | Provision an internet-facing NLB |
| `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: 'Stage=prod,App=web-app'` | Comma-separated list of key-value pairs recorded as additional tags on the ELB |
| `service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=prod-bucket,access_logs.s3.prefix=loadbalancing/web-app` | Enable access logs; `access_logs.s3.bucket` is the name of the Amazon S3 bucket where load balancer access logs are stored, and `access_logs.s3.prefix` specifies the logical hierarchy you created within the bucket |
| `service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true` | Specifies whether cross-zone load balancing is enabled for the load balancer |
Network Load Balancers do not have associated security groups.
However, you can add CIDR rules to the security group of your worker nodes.
This is achieved by making use of `loadBalancerSourceRanges` in the Service spec:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: platform
  labels:
    app: web-app
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: web-app
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.1.0.0/16
```
Here, 10.1.0.0/16 is added to the worker nodes' security group.
It's critical to understand how `loadBalancerSourceRanges` behaves for different NLB configurations:
| NLB Type | Worker Node | loadBalancerSourceRanges | Behavior |
| --- | --- | --- | --- |
| Internal | Private | Not set | Defaults to 0.0.0.0/0. Since the worker nodes and the NLB are private, this causes no impact, as the NLB is not reachable directly from the public internet. |
| Internal | Private | VPC CIDR (e.g. 10.0.0.0/16) or ENI private IP | You can grant access to either the entire VPC CIDR or the private IP of the load balancer's network interface to allow traffic from the NLB. |
| Internet-facing | Private | Not set | Defaults to 0.0.0.0/0, which opens traffic to the entire internet. |
| Internet-facing | Private | Client CIDR | Allows traffic only from the client CIDR ranges, since the NLB has client IP preservation enabled by default. |
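For instance, here is a minimal sketch of the internet-facing case; the client range 203.0.113.0/24 is an illustrative placeholder:

```yaml
# Sketch: internet-facing NLB that only admits a specific client CIDR.
# 203.0.113.0/24 is a documentation placeholder range.
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: platform
  labels:
    app: web-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: web-app
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24
```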
When you create an NLB for an application, the controller adds the rules below to the security group of the worker nodes:
| Purpose | Rule | Number of Rules |
| --- | --- | --- |
| Health check | TCP; node port; source: subnet range CIDR; description: `kubernetes.io/rule/nlb/health=` | One per subnet CIDR |
| Client | TCP; node port; source: source range CIDR; description: `kubernetes.io/rule/nlb/client=` | One per CIDR range |
Consider this setup: an EKS cluster with worker nodes across 3 AZs, and you add one loadBalancerSourceRange.
This results in 4 rules added to the worker node security group: 3 health check rules (1 per subnet) and 1 loadBalancerSourceRange rule.
So, for one NLB you end up with 4 rules.
Security groups have a default limit of 60 inbound rules. At 4 rules per NLB as in the setup above, a single security group can accommodate at most 15 such NLBs.
As you create more NLBs or add more source ranges for your apps running in the same cluster, you will soon hit this limit and won't be able to create any more NLBs.
You can increase the quota, but that quota multiplied by the quota for security groups per network interface cannot exceed 1,000.
The complete Service manifest with the annotations looks as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: platform
  labels:
    app: web-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: 'Stage=prod,App=web-app'
    service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=prod-bucket,access_logs.s3.prefix=loadbalancing/web-app,load_balancing.cross_zone.enabled=true
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: web-app
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.1.0.0/16
```
The above manifest results in the following configuration:
- Uses the AWS Load Balancer Controller instead of the in-tree Kubernetes controller
- Provisions an internal NLB in instance mode
- Sets the tags provided
- Enables access logs to the provided S3 path
- Enables cross-zone load balancing
When a request is sent, it reaches the NLB, which load balances the traffic to the target backends (the worker nodes).
In instance mode, the traffic is sent to the NodePort of the instance, resulting in an additional hop.
The kube-proxy running on the node then sends the request to the desired pod.
EKS by default runs kube-proxy in `iptables` proxy mode, which means that kube-proxy makes use of iptables rules to route traffic to the pods; iptables mode chooses a backend at random.
Another factor is `externalTrafficPolicy`, which is set to `Cluster` by default. In `Cluster` mode, traffic may be routed to a pod on another host to ensure equal distribution.
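If the extra hop is a concern, Kubernetes also supports `externalTrafficPolicy: Local`, which keeps traffic on the node that received it and preserves the client source IP. A minimal sketch using the same web-app Service:

```yaml
# Sketch: Local traffic policy routes only to pods on the receiving node
# and preserves the client source IP; nodes without ready pods fail the
# NLB health checks and are taken out of rotation.
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: platform
  labels:
    app: web-app
spec:
  externalTrafficPolicy: Local
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: web-app
  type: LoadBalancer
```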