Kubernetes Ingress & TLS hosts explained

In recent years Kubernetes has soared in popularity to become one of the most widely used open-source container orchestration systems in the world. Its extensive feature set and comprehensive automation tools allow users to streamline their workflows by eliminating many of the manual tasks involved in deploying, scaling, and managing containerised applications.

When developing applications within a Kubernetes cluster, you may sometimes need to expose an application to traffic from outside the cluster. By default, Services are only reachable from within the cluster network itself.

There are a few possible methods of overcoming this roadblock, each with its own set of rules and outcomes. One of these is Ingress, which can be used in conjunction with the Kubernetes API to govern how external users access services running in a Kubernetes cluster.

In this guide we’re going to be explaining what Ingress is, how it works, and how TLS works with Ingress.

What is Kubernetes Ingress?

Kubernetes Ingress is essentially an assortment of routing rules which determine how users from outside the Kubernetes cluster can access applications running within the cluster. 

Ingress is an API object that uses rules defined in an Ingress Resource to determine where external HTTP and HTTPS traffic should be routed within the cluster. By routing traffic for multiple HTTP/HTTPS services through a single external IP address, it can eliminate the need for multiple load balancers or for exposing individual nodes to external access.

The second part of Ingress, alongside the API object, is the Ingress Controller. The Ingress Controller is essential because it is the practical implementation of the Ingress concept: it is the component that actually routes traffic according to the rules defined in the Ingress Resource.

Production environments can benefit greatly from the use of Ingress. In such environments, you’ll typically need support for multiple protocols, authentication, and content-based routing. Ingress accommodates this by allowing you to configure and manage all of these within the cluster.

Now let’s take a look at the role of the Ingress Controller.

What is the Ingress Controller?

An Ingress Controller is a fundamental component of how Ingress works. Ingress Controllers are responsible for interpreting the rules defined in the Ingress Resource and using that information to process external traffic, sending it along the correct route to the relevant application within the cluster.

In practice, an Ingress Controller is an application running in a Kubernetes cluster that configures an HTTP load balancer according to Ingress Resources. Ingress cannot function with an Ingress Resource alone; it requires an Ingress Controller to actually carry out this process.

An Ingress Controller can be used with different kinds of load balancers, such as software load balancers or external hardware load balancers. Each load balancer requires its own Ingress Controller implementation, depending on the use case.
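Most modern controllers register themselves with the cluster through an IngressClass object, which an Ingress Resource can then reference to select an implementation. The following is a minimal sketch, assuming the community ingress-nginx controller (whose controller identifier is k8s.io/ingress-nginx); the class name nginx is an arbitrary example.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                       # example class name
spec:
  # Identifies which controller implementation handles Ingress Resources
  # that reference this class (ingress-nginx is assumed here)
  controller: k8s.io/ingress-nginx

An Ingress Resource can then opt into this implementation by setting spec.ingressClassName: nginx.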

If the Ingress concept can be thought of as determining what you want to do with external traffic bound to the Kubernetes cluster, then the Ingress Controller would be the implementation of how this is handled. 

Next, we’re going to be looking at the Ingress Resource.

The Ingress Resource

So we’ve determined that the Ingress Resource is a collection of rules that define how external traffic is routed within the Kubernetes cluster. But what does one actually look like?

Here’s a minimal example of an Ingress Resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Let’s break down each aspect of the Resource line by line, to better understand what it means.

Lines 1-4: These fields are standard in any Kubernetes configuration file. apiVersion, kind, and metadata identify the API version, the type of object being created, and the object's name.

Lines 5-7: The Ingress spec field contains the information needed to configure a proxy server or load balancer. The key part is a list of rules that are matched against all incoming requests; the Ingress resource only supports rules for directing HTTP(S) traffic.

Lines 8-10: Here each HTTP rule is provided with the following information:

  • An optional host (e.g. foo.bar.com). No host is set in this example, so the rule applies to all inbound HTTP traffic; a host-based variant is sketched after this breakdown.
  • A list of paths (e.g. /testpath in this example), each with an associated pathType and backend.

Lines 11-15: For each path, a backend is defined by the service.name field together with either service.port.name or service.port.number. Traffic that matches a rule is sent to the endpoints of the referenced Service.
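Since the minimal example above sets no host, here is a hedged sketch of the same Resource with an explicit host added; the hostname foo.bar.com is purely illustrative.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-based-ingress        # hypothetical name, for illustration only
spec:
  rules:
  - host: foo.bar.com             # only requests for this hostname match the rule
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80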

How the resource is processed

When external traffic reaches the load balancer, it is processed by the Ingress Controller as follows:

  1. Each rule is applied to the host it specifies; if no host is provided, the rule applies to all inbound HTTP traffic.
  2. If both the host and the path match the content of an incoming request, the load balancer directs the traffic to the referenced Service.
  3. Traffic matching all of a rule's criteria is sent to the backend, defined by the Service name and port name/number.

An Ingress with no rules sends all traffic to a single default backend. The default backend is typically a configuration option of the Ingress Controller rather than something specified in the Resource itself, although it can also be declared there using the spec.defaultBackend field.

Similarly, if none of the defined hosts or paths match the HTTP request, the traffic is also routed to the default backend.
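As an illustration of the second option, here is a minimal sketch of an Ingress that declares its own default backend; the Service name default-http-backend is an assumption used only for this example.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-ingress    # hypothetical name, for illustration only
spec:
  # Any request that matches none of the rules (there are no rules here)
  # is sent to this backend Service
  defaultBackend:
    service:
      name: default-http-backend   # assumed Service name
      port:
        number: 80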


TLS in Ingress

With security being such a key concern for any business these days, it’s a good idea to encrypt any external traffic being routed to applications within the Kubernetes cluster. This can be achieved with the use of TLS Certificates in the load balancer.

The two fundamental parts required for the load balancer to complete HTTPS handshakes are the certificate and the private key. Without both of these, external traffic can't be encrypted.

When configuring Ingress for TLS encryption, the method used will depend on the type of Ingress Controller being implemented. With that said, the general concepts are much the same across the board.
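As a general sketch of the pattern, the certificate and private key are typically stored in a Kubernetes Secret of type kubernetes.io/tls, which the Ingress then references by name in its tls section. The hostname secure.example.com, the Secret name example-tls, and the Service name test below are assumptions used only for illustration.

apiVersion: v1
kind: Secret
metadata:
  name: example-tls              # assumed Secret name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress              # hypothetical name, for illustration only
spec:
  tls:
  - hosts:
    - secure.example.com         # host the certificate covers
    secretName: example-tls      # Secret holding the certificate and key
  rules:
  - host: secure.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80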

How it works

When the load balancer accepts HTTPS connections, the traffic between the client and the load balancer is encrypted using TLS. The load balancer then terminates the TLS connection and routes the request to the relevant backend application, based on the host and path information in the request.

Effectively, by using just one certificate you can secure all of the services in the Kubernetes cluster. However, in a production environment you may want to use a different certificate for each service. Thankfully, Ingress accommodates this.

Once traffic reaches the load balancer, the load balancer uses Server Name Indication (SNI) to determine which TLS certificate to present to the client, based on the domain name sent in the TLS handshake. Traffic is then sent to the relevant backend, depending on the domain name in the request.

If the client doesn't use SNI, or uses a domain name that doesn't match a Common Name (CN) in one of the TLS certificates, then the load balancer typically falls back to the first certificate listed in the Ingress.
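To illustrate multiple TLS hosts, here is a hedged sketch of a single Ingress serving two hostnames, each with its own certificate stored in its own Secret; all of the names below (hosts, Secrets, Services) are assumptions for the example. The controller uses the SNI hostname to select the matching entry from the tls list.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-tls-ingress     # hypothetical name, for illustration only
spec:
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-tls   # assumed Secret with the shop.example.com certificate
  - hosts:
    - blog.example.com
    secretName: blog-example-tls   # assumed Secret with the blog.example.com certificate
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop             # assumed Service name
            port:
              number: 80
  - host: blog.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog             # assumed Service name
            port:
              number: 80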

[Diagram: multiple SSL certificates with Ingress. Source: Google Kubernetes Engine documentation]

In Conclusion

We hope this guide has given you a clearer picture of how Kubernetes Ingress works and how multiple TLS hosts can be handled within the cluster.

At UKHost4u we provide a variety of plans to help you set up your Kubernetes host network easily. Take a look on our website today to see how a Kubernetes hosting plan can benefit your applications!
