Setting up Linkerd

Installing Linkerd2 into an existing Kubernetes Cluster

kubernetes linkerd service mesh cloud
2019-06-05
Thomas Kooi

One awesome tool that I got to learn a great deal more about during KubeCon EU is Linkerd 2. It’s simple to use and looks really promising. This post is about setting it up and the things I encountered during that process.

When going to the Linkerd documentation, the Getting Started page is pretty helpful for getting up and running. This post was written against Linkerd v2.3.1.

Before going further, you will need a functional Kubernetes cluster. I use Docker Desktop for this.

Getting the CLI tool

You will want to install the linkerd CLI tool. They provide an easy-to-use script for doing so:

curl https://run.linkerd.io/install | sh

Alternatively, go to their Releases and download the linkerd2-cli binary for your system. I’d recommend the stable version.
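
Whichever route you take, make sure the binary ends up on your PATH - the install script typically places it under ~/.linkerd2/bin - and confirm it runs:

export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version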

Awesome pre-install check

The Linkerd CLI tool has a check command to validate whether your cluster is ready for the installation:

linkerd check --pre

Essentially, it validates the Kubernetes version, checks whether you have the RBAC permissions needed to create the required resources, and verifies that you won’t run into any issues with Pod Security Policies.

Installing

When installing, you will probably also want to enable the auto-inject functionality. From what I understood, it’s going to be enabled by default in a later version. I haven’t run into any trouble with it yet - it’s also opt-in when enabled, so you will need to explicitly mark a namespace, deployment or statefulset to be part of the mesh.

linkerd install --proxy-auto-inject | kubectl apply -f -

Running the linkerd install command will set up the control plane for the mesh in its own namespace (linkerd by default).

Once completed, you can use linkerd check to validate that the installation has succeeded.

linkerd check
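
You can also keep an eye on the control plane pods directly while they come up:

kubectl get pods -n linkerd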

One of the issues I had with this is that it’s not yet possible to configure affinity or nodeSelectors for the Linkerd control plane. I deployed this into a staging cluster as well, but before doing so I manually modified the spec for all deployments to include my desired nodeSelectors.
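
If you need to do the same, one workaround (just a sketch - the node label below is hypothetical) is to render the manifests to a file, add your nodeSelectors to each control plane deployment, and apply the result:

linkerd install --proxy-auto-inject > linkerd-control-plane.yaml
# edit linkerd-control-plane.yaml and add to each Deployment's pod spec:
#   nodeSelector:
#     node-role: infra    # hypothetical node label
kubectl apply -f linkerd-control-plane.yaml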

Joining the mesh

Joining a service into the mesh is pretty easy; it just needs some annotations. You can enable it at the namespace level (kubectl annotate namespace <namespace> linkerd.io/inject=enabled) or add an annotation to a pod spec. For a deployment, that means setting linkerd.io/inject: enabled under spec.template.metadata.annotations.
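
For example, a (hypothetical) deployment with the annotation set would look roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api                  # hypothetical workload name
  namespace: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
      annotations:
        linkerd.io/inject: enabled   # opts this workload into the mesh
    spec:
      containers:
      - name: api
        image: example/api:1.0.0     # hypothetical image
        ports:
        - containerPort: 8080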

Note that pods need to be recreated before they join the mesh. Any newly created pods will start up with a sidecar container, the linkerd-proxy.
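
Existing workloads can pick up the sidecar by triggering a fresh rollout. With a recent kubectl (1.15+), and using the hypothetical deployment from above, that could be:

kubectl rollout restart deployment example-api -n example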

After running with it

So here are some things I ran into when installing Linkerd into a couple of staging clusters:

The right interface

Due to how the linkerd-proxy works, your process needs to bind to the loopback interface (for instance, 127.0.0.1, or 0.0.0.0 for all interfaces). If you only bind to the pod’s private IP, and not to the loopback, the traffic will never reach your service / process. I ended up having to tweak a couple of services for this that were bound to the pod’s IP only.
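
Purely as an illustration - the flag and names below are hypothetical, since how you configure the listen address depends entirely on your application - the fix was usually a one-line change in the container spec:

containers:
- name: api
  image: example/api:1.0.0               # hypothetical image
  # bind to 0.0.0.0 (or 127.0.0.1) rather than the pod IP, so the traffic
  # the linkerd-proxy forwards over loopback actually reaches the process
  args: ["--listen-addr=0.0.0.0:8080"]   # hypothetical flag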

Health checks

Health checks / livenessProbes work fine with Linkerd installed, but I ended up switching them to host: 127.0.0.1 to avoid them going through the linkerd-proxy and messing up the metrics. Note that this probably won’t work or be relevant in all situations. Also - I’m not sure how Linkerd will handle mTLS and health checks once we get the option to enforce it, so bypassing the proxy seemed like a good idea for that as well. Though that’s a problem for later, I suppose.
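
For reference, the probe ended up looking roughly like this (this snippet sits under the container spec; the path and port are hypothetical):

livenessProbe:
  httpGet:
    host: 127.0.0.1    # talk to the process directly, bypassing the linkerd-proxy
    path: /healthz     # hypothetical health endpoint
    port: 8080         # hypothetical application port
  initialDelaySeconds: 5
  periodSeconds: 10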

Network Policies

If you are running with Network Policies, you will want to configure some rules for this. Here are the ones I used; they could probably be a bit better:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-identity-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          linkerd.io/control-plane-component: identity
          linkerd.io/control-plane-ns: linkerd
    ports:
    - port: 8080

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-prometheus-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          linkerd.io/control-plane-component: prometheus
          linkerd.io/control-plane-ns: linkerd
    ports:
    - port: 4191
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-egress-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          linkerd-namespace-label: 'true'
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-ingress-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          linkerd-namespace-label: 'true'
    ports:
    - port: 4143
    - port: 4190
    - port: 4191
---
Note: label the linkerd namespace with linkerd-namespace-label=true. Feel free to pick your own label name.
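
For example:

kubectl label namespace linkerd linkerd-namespace-label=true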

I’ve not yet configured a network policy for the linkerd namespace itself - I will share those once I’ve got it locked down some more. If you have those and wish to share, please do poke me on Twitter.

reqps impact

Putting anything between your users and backend services will obviously have some cost: higher CPU usage, latency, etc. I’m currently still running some performance tests in my clusters, but there is a noticeable drop in requests per second. I should probably debug that a bit more to figure out what’s going on there. Meanwhile, you will probably want to check out this post about a Linkerd performance benchmark.

So overall, I think Linkerd 2 is really promising. The metrics and monitoring part is already really strong, as well as the simplicity of the set-up. Some time over the next few weeks, I’m going to dive into the load balancing and mTLS parts of Linkerd a bit more.

