Two ways to set up HTTP redirects in a Kubernetes cluster

When migrating from one domain name to another, one may need to set up a 301 (Moved Permanently) or 302 (Found) redirect for a certain domain and its paths. If Kubernetes is your runtime, there are at least two ways to do that:

  • Deploy nginx server as an additional service to handle redirects.

  • Handle redirects at your ingress controller.

The first one is safer but a bit more complex; the second one utilizes the ingress controller of your cluster, so you get redirects "for free", but it comes with a penalty. Let's look at both approaches in detail.

As an example, we will set up redirects from old-domain.tld to new-domain.tld for two paths:

  • /pages/about → /about with a 301 Moved Permanently status

  • / → / with a 302 Found status
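In raw HTTP terms, the first mapping corresponds to an exchange like this (a sketch; headers are abbreviated):

```
GET /pages/about HTTP/1.1
Host: old-domain.tld

HTTP/1.1 301 Moved Permanently
Location: https://new-domain.tld/about
```

The browser (or any well-behaved client) then re-issues the request against the URL in the Location header.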

Additional service to handle redirects

To avoid creating all the k8s resources from scratch, one can use Bitnami's nginx helm chart.

Here is an example of a values.yaml file that configures ingress to route traffic for the old domain to the nginx container and injects a custom "server" block to implement the redirect rules:

ingress:
  enabled: true
  hostname: old-domain.tld
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  tls: true

serverBlock: |-
  server {
    listen 0.0.0.0:8080;
    location /pages/about {
      return 301 https://new-domain.tld/about;
    }
    location / {
      return 302 https://new-domain.tld/;
    }
  }
To install the chart, run the following commands (assuming the file above is located in the current directory and is named my-values.yaml):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redirector bitnami/nginx -f my-values.yaml
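Before installing, you can preview the manifests the chart will render with helm's built-in template command (a sketch; output omitted here):

```shell
# Render the chart locally without installing, to inspect the generated
# Ingress, ConfigMap with the server block, and Deployment.
helm template redirector bitnami/nginx -f my-values.yaml
```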

In this case, the ingress controller will proxy requests to the "redirector" instance of nginx and it will respond with appropriate redirect headers.
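Once everything is deployed and DNS points at your cluster, the redirects can be verified with curl (the -I flag fetches response headers only):

```shell
# Expect a 301 status line and a Location header pointing at the new domain.
curl -sI https://old-domain.tld/pages/about

# Expect a 302 status line for the root path.
curl -sI https://old-domain.tld/
```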

Ingress configuration to handle redirects

But why use an additional nginx container if you probably already have one in your cluster, your ingress controller? Is it possible to inject the redirect configuration directly into it? The answer is yes, but there are several issues.

The only thing you need is the ingress resource looking something like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-redirector
  annotations:
    nginx.org/server-snippets: |
      location /pages/about {
        return 301 https://new-domain.tld/about;
      }
      location / {
        return 302 https://new-domain.tld/;
      }
spec:
  ingressClassName: nginx
  rules:
    - host: old-domain.tld
  tls:
    - hosts:
        - old-domain.tld
      secretName: old-domain-cert

As you can see, the nginx configuration is injected via an annotation and follows nginx configuration syntax. Otherwise, the ingress doesn't look any different from a regular one, except that there is no need for a backend service, as all the traffic will be handled by the ingress controller itself.

The "issues" I mentioned are:

  • Your ingress controller has to be based on nginx (other controllers may also be configurable for this task, but they are not covered in this blog post).

  • Snippets have to be enabled at the controller level, either with the enable-snippets command-line argument or, if you deploy your ingress controller with helm, by setting the controller.enableSnippets parameter to true in your values.yaml file.
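With a helm-based deployment, the corresponding values.yaml fragment would look something like this (a sketch, assuming the NGINX Inc. ingress controller chart):

```yaml
# values.yaml for the ingress controller chart
controller:
  enableSnippets: true
```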

  • It introduces a certain risk: if the configuration snippet has invalid syntax, it will block configuration reloads for all the services using this ingress controller in the cluster; even unrelated ingress configuration changes won't apply. See more in "Disadvantages of Using Snippets".
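Before applying such an ingress, the snippet can be checked for syntax errors locally. A minimal sketch, assuming Docker is available: wrap the snippet in a throwaway nginx config and run nginx -t against it.

```shell
# Wrap the redirect snippet in a minimal standalone nginx config.
cat > /tmp/snippet-test.conf <<'EOF'
events {}
http {
  server {
    listen 8080;
    location /pages/about {
      return 301 https://new-domain.tld/about;
    }
    location / {
      return 302 https://new-domain.tld/;
    }
  }
}
EOF

# Validate syntax only; nginx -t exits non-zero on errors.
docker run --rm -v /tmp/snippet-test.conf:/etc/nginx/nginx.conf:ro nginx:alpine nginx -t
```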

To reduce the risk of invalid configuration, you can use a public helm chart I've created. Of course, it is still possible to break things with a special character in the "from", "to" or "code" parameters, but at least it is harder to do unintentionally, and the values.yaml file looks a bit friendlier:

ingress:
  rules:
    - host: old-domain.tld
  tls:
    - secretName: old-domain-tld-tls
      hosts:
        - old-domain.tld

redirects:
  - from: /pages/about
    to: https://new-domain.tld/about
    code: 301
  - from: /
    to: https://new-domain.tld/
    code: 302


Which solution to choose depends on your situation. Do you have several development teams, where such implicit dependencies between services in a shared runtime could cause an issue? Or is it just a pet project, where you would really like to avoid deploying an extra nginx container that consumes resources? Kubernetes can serve your needs either way.