Author: David Dymko
Last Updated: Fri, Sep 23, 2022

By default, Kubernetes requires you to manage some portions of your application by hand, such as DNS records and TLS certificates. But wouldn't it be better to define your desired DNS and TLS alongside the application manifests?
The good news is that you can use two open-source Kubernetes plugins to automate the process. This guide explains how to install and configure ExternalDNS for DNS management in your manifests and cert-manager to handle certificate management.
To follow this guide, you'll need:
You should also know how to:
Log in to the registrar where you purchased your domain and set the nameserver (NS) records to Vultr's name servers:
* ns1.vultr.com
* ns2.vultr.com
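Nameserver changes can take some time to propagate. As an optional check, you can confirm the delegation with dig, assuming it is installed on your workstation:
$ dig +short NS example.com
The output should list ns1.vultr.com and ns2.vultr.com once propagation completes.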
Use vultr-cli to create a new DNS zone at Vultr.
$ vultr-cli dns domain create -d example.com
DOMAIN DATE CREATED DNS SEC
example.com 2022-03-19T19:12:00+00:00 disabled
Verify the zone with vultr-cli.
$ vultr-cli dns record list example.com
ID TYPE NAME DATA PRIORITY TTL
87be33b9-24fb-4502-9559-7eace63da9f7 NS ns1.vultr.com -1 300
de8edb75-7061-4c50-be79-4b67535aeb92 NS ns2.vultr.com -1 300
At a high level, cert-manager is a Kubernetes add-on that introduces custom resources for managing certificates natively within Kubernetes.
Cert-manager can:
Vultr offers custom cert-manager webhooks so users can issue certificates as YAML manifests. In the specific use case for this guide, you'll use the Vultr cert-manager-webhook plugin, which handles TLS certificates for your domains.
In this step, you'll install the base cert-manager and the Vultr-specific cert-manager-webhook.
First, install the base cert-manager with kubectl apply as described in the cert-manager documentation.
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
After applying the YAML, you can inspect the related resources in the cert-manager namespace.
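For example, a quick check of the pods (the exact pod names and hash suffixes will differ in your cluster):
$ kubectl get pods --namespace cert-manager
You should see the cert-manager, cert-manager-cainjector, and cert-manager-webhook pods in the Running state.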
Next, create a secret that contains your Vultr API key. Cert-manager uses this secret to create the DNS entries required for domain validation.
$ kubectl create secret generic "vultr-credentials" --from-literal=apiKey=<VULTR API KEY> --namespace=cert-manager
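As an optional sanity check, confirm the secret exists before moving on:
$ kubectl get secret vultr-credentials --namespace cert-manager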
Now install the Vultr-specific cert-manager-webhook with Helm. The chart path below assumes you have cloned the cert-manager-webhook-vultr repository and are running the command from its root directory.
$ helm install --namespace cert-manager cert-manager-webhook-vultr ./deploy/cert-manager-webhook-vultr
Verify that the Vultr webhook is running by inspecting the cert-manager namespace.
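For example (the pod hash suffix will differ in your cluster):
$ kubectl get pods --namespace cert-manager | grep webhook-vultr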
To issue certificates, you must create YAML definitions for a ClusterIssuer and grant permissions to the service account.
A ClusterIssuer represents the Certificate Authority (CA) used to create the signed certificates.
This example uses the Let's Encrypt staging environment. For production, use https://acme-v02.api.letsencrypt.org/directory.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: {YOUR EMAIL ADDRESS}
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    solvers:
      - dns01:
          webhook:
            groupName: acme.vultr.com
            solverName: vultr
            config:
              apiKeySecretRef:
                key: apiKey
                name: vultr-credentials
You must grant the webhook's service account permission to read the secret. Deploy the following Role-Based Access Control (RBAC) resources:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cert-manager-webhook-vultr:secret-reader
  namespace: cert-manager
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["vultr-credentials"]
    verbs: ["get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-manager-webhook-vultr:secret-reader
  namespace: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cert-manager-webhook-vultr:secret-reader
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: cert-manager-webhook-vultr
    namespace: cert-manager
Apply the YAML. After you deploy the ClusterIssuer and RBAC resources, you can request TLS certificates from Let's Encrypt for domains hosted on Vultr.
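For example, if you saved the two manifests above as clusterissuer.yaml and rbac.yaml (both filenames are arbitrary), apply them and confirm the issuer registers:
$ kubectl apply -f clusterissuer.yaml
$ kubectl apply -f rbac.yaml
$ kubectl get clusterissuer letsencrypt-staging
The ClusterIssuer reports Ready as True once its ACME account is registered with Let's Encrypt.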
The Certificate resource is the human-readable certificate request definition that is honored by an issuer and kept up-to-date. Here's an example:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-cert-example-com
spec:
  commonName: example.com # REPLACE THIS WITH YOUR DOMAIN
  dnsNames:
    - '*.example.com' # REPLACE THIS WITH YOUR DOMAIN
    - example.com # REPLACE THIS WITH YOUR DOMAIN
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  secretName: example-com-staging-tls # Replace this with your domain
Here's a description of the key fields:
* commonName and dnsNames: the domains the certificate covers. The wildcard entry covers all subdomains of example.com.
* issuerRef: the ClusterIssuer, defined above, that signs the certificate.
* secretName: the Kubernetes secret where cert-manager stores the issued certificate and private key.
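To request the certificate, apply the manifest; the filename certificate.yaml is only an example:
$ kubectl apply -f certificate.yaml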
Kubernetes creates a few more resources after you create the Certificate. They are:
* A certificateRequest that asks the issuer to sign the certificate.
* An order for a signed TLS certificate.
* A challenge, which must be completed for an authorization for a single DNS name.
Here is an example kubectl output:
$ kubectl get certificates
NAME READY SECRET AGE
staging-cert-example-com False example-com-staging-tls 47s
$ kubectl get certificateRequests
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
staging-cert-example-com-qvjvj True False letsencrypt-staging system:serviceaccount:cert-manager:cert-manager 55s
$ kubectl get orders
NAME STATE AGE
staging-cert-example-com-qvjvj-3598131141 pending 59s
$ kubectl get challenges
NAME STATE DOMAIN AGE
staging-cert-example-com-qvjvj-3598131141-1598866100 pending example.com 61s
See the Concepts section of the cert-manager documentation to learn more about these resources.
Validation of the certificate takes a few minutes. You can check the status with kubectl:
$ kubectl get certificates
NAME READY SECRET AGE
staging-cert-example-com True example-com-staging-tls 5m11s
Check the Ready state of the certificate. When it returns True, your valid TLS certificate from Let's Encrypt is stored in the secret name you defined in the certificate YAML.
$ kubectl get secrets | grep "example-com-staging-tls"
example-com-staging-tls kubernetes.io/tls 2 33m
Remember that this example ClusterIssuer points to the Let's Encrypt staging environment. For production, use https://acme-v02.api.letsencrypt.org/directory.
You have automated TLS for your domain on Kubernetes!
Next, you'll set up ExternalDNS to automatically create DNS entries and point them to the IP addresses of your ingress or loadbalancer services in Kubernetes.
The ExternalDNS installation at Vultr is straightforward. You can install ExternalDNS with the following YAML manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.10.2
          args:
            - --source=ingress # service is also possible
            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
            - --provider=vultr
            - --registry=txt
            - --txt-owner-id=your-user-id
          env:
            - name: VULTR_API_KEY
              value: "{API KEY}" # Enter your Vultr API Key
Here's a description of the key fields in the Deployment spec args section:
* --source: watch ingress resources, or pair ExternalDNS with a regular loadbalancer service.
* --registry: set to txt to create a TXT record that accompanies each record created by external-dns.
Apply the external-dns YAML and verify it's running correctly by inspecting the pod.
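For example, if you saved the manifest above as external-dns.yaml (the filename is arbitrary):
$ kubectl apply -f external-dns.yaml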
$ kubectl get pods | grep "external-dns"
external-dns-8cb7f649f-bg8m5 1/1 Running 0 10m
After ExternalDNS is running, you can add annotations to your service manifests for DNS entries.
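For instance, here is a minimal sketch of a LoadBalancer service carrying the ExternalDNS hostname annotation. The service name and hostname are placeholders, and this pattern only applies if you run ExternalDNS with --source=service:
apiVersion: v1
kind: Service
metadata:
  name: my-app # placeholder service name
  annotations:
    # ExternalDNS creates an A record for this hostname that points
    # to the load balancer IP assigned to this service.
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80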
ExternalDNS and cert-manager ensure you always have valid TLS certificates. To test this, deploy a simple application and expose it to the public internet over HTTPS. For example, here is a single-replica deployment of Nginx and a ClusterIP service that routes to the deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
To expose the Nginx deployment to the internet, use ingress-nginx, or use a loadbalancer service type instead of an ingress. If you use loadbalancer, change the --source argument from ingress to service in your ExternalDNS YAML.
To use Kubernetes Nginx ingress, apply the prepared manifests from the ingress controller quick start guide.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
This creates a new namespace, ingress-nginx, for the ingress resources. Here's an example ingress entry that exposes the Nginx app.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    external-dns.alpha.kubernetes.io/hostname: www.example.com
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-prod-tls
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
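Apply the ingress manifest; the filename ingress.yaml is only an example:
$ kubectl apply -f ingress.yaml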
The key parts of this manifest are:
* The external-dns.alpha.kubernetes.io/hostname: www.example.com annotation defines what entry ExternalDNS should create. In this case, it creates an A record for www that points to the load balancer deployed by the ingress.
* The tls.hosts section defines which domain the ingress should treat as HTTPS.
* The secretName has the issued TLS certificates.
* The rules.host section defines what URL should route to which service. In this example, www.example.com/ should go to this Nginx service deployment.
After you deploy the ingress, inspect it with kubectl.
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx nginx www.example.com 192.0.2.123 80, 443 13h
Kubernetes and the DNS system will take a few minutes to propagate the requests and domain records, and then you should have a domain backed with HTTPS.
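Once propagation finishes, you can verify the result from any machine; this is an optional check and assumes curl is installed:
$ curl -I https://www.example.com
If the request succeeds without certificate warnings, DNS and TLS are both working.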
To recap what you've accomplished:
With these three tools at your disposal, you can define your application's entire state in YAML manifests and let Kubernetes handle the rest.
For more information, see these useful resources: