How to add arbitrary records to Kube-DNS

It all started the other day when I wanted to override some DNS records inside my Kubernetes cluster, but I couldn't find a straightforward way to do it, at least not at first glance. As they say, the best way to learn something is to get your hands dirty with it. So I took matters into my own hands, dug deeper, found some really interesting stuff, and decided to write about it.

I found two possible solutions to this problem:

  1. Pod-wise: adding the new records to every Pod that needs to resolve these domains.
  2. Cluster-wise: adding the records to a central place that all Pods have access to, which in our case is the cluster DNS.

Let’s begin with the pod-wise solution:

As of Kubernetes 1.7, it's possible to add entries to a Pod's /etc/hosts directly using .spec.hostAliases.

For example: to resolve foo.local and bar.local to 127.0.0.1, and foo.remote and bar.remote to 10.1.2.3, you can configure hostAliases for the Pod under .spec.hostAliases:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
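After the Pod runs, the cat command should print an /etc/hosts file with the aliases appended at the end, along these lines (the exact comment wording can vary between Kubernetes versions):

```
# Entries added by HostAliases.
127.0.0.1    foo.local    bar.local
10.1.2.3     foo.remote   bar.remote
```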

Easy enough. The problem with this approach is that you have to add the hostAliases to every resource that needs access to the custom entries, and that's not ideal at all.

The cluster-wise solution:

DNS-based service discovery has been part of Kubernetes for a long time via the Kube-DNS cluster addon. It has generally worked pretty well, but there have been some concerns around the reliability, flexibility, and security of the implementation. As of Kubernetes v1.11, CoreDNS is the recommended DNS server, replacing Kube-DNS. If your cluster originally used Kube-DNS, you may still have it deployed rather than CoreDNS. From here on, I'm going to assume that you're using CoreDNS as your cluster DNS.

CoreDNS makes it possible to add arbitrary entries to the cluster DNS, so that every Pod resolves them directly from the DNS server, with no need to change each and every /etc/hosts file in every Pod.

First, let's edit the coredns ConfigMap and add the required changes:

$ kubectl edit cm coredns -n kube-system

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts /etc/coredns/customdomains.db example.org {
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  customdomains.db: |
    10.10.1.1 mongo-en-1.example.org
    10.10.1.2 mongo-en-2.example.org
    10.10.1.3 mongo-en-3.example.org
    10.10.1.4 mongo-en-4.example.org

Basically, we added two things:

  1. The hosts plugin before the kubernetes plugin, with its fallthrough option enabled so that queries our custom hosts file can't answer still reach the kubernetes plugin.

    To shed some more light on the fallthrough option: any given backend is usually the final word for its zone. It either returns a result, or it returns NXDOMAIN for the query. Occasionally, though, that isn't the desired behavior, so some plugins support a fallthrough option. With fallthrough enabled, instead of returning NXDOMAIN when a record is not found, the plugin passes the request down the plugin chain. A backend further down the chain then has the opportunity to handle the request, and in our case that backend is the kubernetes plugin.

  2. A new key in the ConfigMap (customdomains.db) holding our custom domains (mongo-en-*.example.org).
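The fallthrough behavior above can be sketched as a tiny shell toy model. None of this is CoreDNS code; the file contents and the stand-in answer for the next backend are purely illustrative:

```shell
# Toy model of the hosts -> kubernetes plugin chain: the hosts file wins
# when it has an entry, otherwise the query "falls through" to the next
# backend in the chain.
cat > customdomains.db <<'EOF'
10.10.1.1 mongo-en-1.example.org
10.10.1.2 mongo-en-2.example.org
EOF

resolve() {
  # hosts plugin: answer from the custom hosts file if the name is there
  ip=$(awk -v name="$1" '$2 == name {print $1}' customdomains.db)
  if [ -n "$ip" ]; then
    echo "$ip"
  else
    # fallthrough: pass the query down the chain (the kubernetes plugin)
    echo "passed-to-kubernetes"
  fi
}

resolve mongo-en-1.example.org   # answered by the hosts file
resolve kubernetes.default       # falls through to the next plugin
```

A name found in customdomains.db never reaches the kubernetes plugin, while every other name behaves exactly as if the hosts plugin weren't there.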

The last thing to remember is to add the customdomains.db file to the config-volume in the CoreDNS Deployment's Pod template:

$ kubectl edit -n kube-system deployment coredns

Then:

volumes:
- name: config-volume
  configMap:
    name: coredns
    items:
    - key: Corefile
      path: Corefile
    - key: customdomains.db
      path: customdomains.db

And finally, restart the CoreDNS Deployment so that every running pod picks up the new config:

$ kubectl rollout restart -n kube-system deployment/coredns
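To sanity-check the result, one option is a throwaway busybox Pod that queries one of the custom names. A minimal sketch (the Pod name and image tag here are arbitrary choices):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: dns-test
    image: busybox:1.28
    command: ["nslookup", "mongo-en-1.example.org"]
```

Once the restarted CoreDNS pods are ready, kubectl logs dns-test should show the 10.10.1.1 address from our customdomains.db.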

This post is a small improvement over my answer on Stack Overflow to the same question.