Governance & Compliance

In regulated industries it is essential to adhere to regulatory requirements. Part of those requirements is the ability to define policies at different levels of IT operation, for both technical and organizational processes. UKS includes several technologies, partly as base Kubernetes services and partly as additional cluster extensions, that enable organizations to define and enforce those policies.

Kubernetes Plugins

Every UKS cluster contains additional services and configuration that empower operators and users to fulfill different requirements for organizational and technical security.


Cilium

Cilium is the CNI plugin used in UKS. It uses the modern eBPF facilities in newer Linux kernels to configure complex network data paths for communication between containers running on different nodes. The components of Cilium and their interactions are illustrated in the following image (source):

The most important elements are:

  • cilium-agent provides an API to interact with Cilium and updates the eBPF programs to reflect the current state of packet routing requirements.
  • cilium-cni integrates with the container runtime and is invoked whenever a container is scheduled or terminated. It interacts with the Cilium API to make it aware of those changes.
  • The cilium CLI client can be used to interact with the Cilium API on the command line to inspect the current state or make changes.

Within Kubernetes, Cilium needs to store certain information in a global key-value store. By default Kubernetes Custom Resources are used, but using a separate etcd cluster is also possible.

Using these elements Cilium is able to provide a mature set of features, among them the following:

Packet routing between nodes

A central task of Cilium is ensuring network connectivity between containers running on different nodes in the cluster. In UKS, Cilium runs in encapsulation mode by default. In this mode Cilium uses the VXLAN protocol to encapsulate network packets to and from containers in UDP packets that are sent directly between the nodes running those containers. The eBPF programs on the nodes then ensure that each encapsulated packet is unwrapped and delivered to the correct container.

Using encapsulation simplifies deployments in cloud environments but is not the only routing mode supported by Cilium. It is also possible to disable tunnelling completely. In this mode the operator has to ensure that the network is capable of forwarding IP traffic using IPs given to containers and the Linux kernel on the nodes must be aware of how to forward packets addressed to container IPs.

Using this mode it is possible for operators to uniquely identify sources and targets of packets between containers at the IP level. This allows for observability and policy enforcement using the same tools that are already used for monitoring and controlling the rest of the network.
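As a sketch, switching Cilium to native routing is typically done through its Helm values. The exact option names depend on the chart version (recent releases use routingMode where older ones used a tunnel option), and the CIDR below is a placeholder:

```yaml
# values.yaml excerpt (hypothetical): run Cilium without encapsulation
routingMode: native                 # disable VXLAN tunnelling
ipv4NativeRoutingCIDR: 10.0.0.0/8   # placeholder: CIDR the network routes natively
autoDirectNodeRoutes: true          # nodes install routes to each other's pod CIDRs
```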

Transparent network encryption

UKS configures transparent encryption in Cilium using WireGuard. With this, nodes create WireGuard tunnels between each other using encryption keys generated by Cilium and shared via the global key-value store. This means that all container traffic between nodes is encrypted and cannot be read by an attacker listening to traffic on the network. The encryption is transparent to the involved containers, which means no special handling at the application layer is necessary.
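A minimal sketch of enabling this in Cilium's Helm values (assuming a recent Cilium chart; the key names should be verified against the deployed version):

```yaml
# values.yaml excerpt (hypothetical): transparent pod-to-pod encryption
encryption:
  enabled: true
  type: wireguard   # WireGuard tunnels between nodes, keys managed by Cilium
```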

Network Policies

Cilium supports enforcement of Kubernetes NetworkPolicies. The native Kubernetes network policies implement traffic control on OSI layers 3 and 4 within the cluster and look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```

Network policies can specify rules for ingress and egress. Ingress rules concern packets whose destination is the given container, while egress rules are applied to packets sent by the given container. If there are no network policies defined for a container it is non-isolated and can send or receive packets without restrictions.

Ingress rules normally define from rules that describe which sources should be able to send packets to the given container. They can select packets by source IP (ipBlock), by the namespace of the source pod (namespaceSelector), or by the source pod itself (podSelector). Using the ports key, packets can be restricted to certain UDP or TCP ports.

Egress rules can define network targets for packets sent by the container in the to key. The same selectors as in the ingress from rules are available. Allowed target ports for packets originating in the container can be specified in the ports key.

In addition to the default Kubernetes network policies, Cilium supports CiliumNetworkPolicies, which add filter semantics on OSI layers 3-7. With these it is possible, for example, to restrict HTTP connections to specific paths or methods. Depending on the use case and requirements, complex rules can be defined for how containers within the cluster may communicate with each other or the outside world.
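As an illustration of the layer-7 semantics, a CiliumNetworkPolicy could restrict ingress to GET requests on a given path. The labels, port, and path below are hypothetical:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: backend            # hypothetical label of the protected pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend         # only this app may connect at all
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                 # layer-7 filtering: only GET on /public/...
        - method: "GET"
          path: "/public/.*"
```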

Cert Manager

When operating complex or public-facing applications it is often necessary to also operate a public key infrastructure (PKI) that provides certificates and encryption keys to applications for securing the communication between them or between them and the application users. To ease the operational burden of managing such a PKI, UKS provides cert-manager, a Kubernetes-native application that simplifies the creation and management of certificates. The following image illustrates its operation (source):

[Image: overview of cert-manager operation — Issuers such as letsencrypt-prod, venafi-tpp, and venafi-as-a-service produce a signed keypair for domains like example.com and www.example.com]

cert-manager can be configured to use one or more of the supported issuers. Using these issuers it creates certificates and encryption keys according to the specified Certificate custom resources. It also ensures that those certificates are renewed automatically a certain time before they expire. The actual certificate and key data is saved to Kubernetes Secrets and can then be used by containers within the cluster.

Available issuers are among others:

  • CA takes the certificate and private key of a CA and uses those to create and sign new certificates. The CA certificate and private key have to be added to the cluster by the operator.
  • ACME uses the ACME protocol to request certificates for a given domain. This can be used to request public-facing certificates from Let's Encrypt.
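A minimal sketch of a CA issuer together with a Certificate request (the resource names, namespace, and DNS name are hypothetical; the Secret ca-key-pair must already contain the operator-provided CA certificate and key):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: default
spec:
  ca:
    secretName: ca-key-pair      # Secret holding the CA certificate and key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: backend-tls
  namespace: default
spec:
  secretName: backend-tls        # Secret where the signed cert and key are stored
  duration: 2160h                # 90 days
  renewBefore: 360h              # renew 15 days before expiry
  dnsNames:
  - backend.default.svc.cluster.local
  issuerRef:
    name: ca-issuer
    kind: Issuer
```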


Secrets Encryption

During cluster operation, sensitive data like certificates and encryption keys needs to be managed and made available to containers running in the cluster. Within Kubernetes such sensitive data is stored in Secrets. To improve security, secret data is never written to disk unencrypted; instead, Kubernetes is configured to use encryption at rest.

There are several different options for the algorithm used to encrypt the data, among them:

  • XSalsa20 and Poly1305
  • AES-GCM with random nonce
  • AES-CBC with PKCS#7 padding

The keys used for encryption can be generated during cluster deployment, provided by the user or requested from a supported KMS provider.
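Encryption at rest is configured through the Kubernetes API server's EncryptionConfiguration. A sketch using the secretbox provider (which implements XSalsa20 and Poly1305), with the key material as a placeholder:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - secretbox:            # XSalsa20 and Poly1305
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # placeholder
  - identity: {}          # fallback so existing unencrypted data stays readable
```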

Cluster Extensions

In addition to the features present in every UKS installation there is a range of optional extensions that can be installed and that extend the feature set of the cluster even further.

Extended Policy Engine

Out of the box, Kubernetes allows enforcement of the different Pod Security Standards on a per-namespace level. If this is not enough, because the three available policies are not granular enough or because policies should be applied more specifically than per namespace, an additional extended policy engine can be installed.
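For reference, the built-in mechanism works via namespace labels; a hypothetical namespace enforcing the restricted standard would look like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # reject non-conformant pods
    pod-security.kubernetes.io/warn: restricted     # also surface warnings
```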

With this extended policy engine the following additional functionality is available:

  • policies as Kubernetes resources
  • validation, mutation, or generation of any resource
  • verification of container images for software supply chain security
  • inspection of image metadata
  • matching resources using label selectors and wildcards
  • validation and mutation using overlays (like Kustomize)
  • synchronization of configurations across namespaces
  • blocking of non-conformant resources using admission controls, or reporting policy violations
  • testing policies and validating resources
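The document does not name the engine, but the feature list matches engines such as Kyverno. Assuming a Kyverno-style resource format, a hypothetical policy blocking Pods without an app label could be sketched as:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label            # hypothetical policy name
spec:
  validationFailureAction: Enforce   # block non-conformant resources at admission
  rules:
  - name: check-app-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "The label app.kubernetes.io/name is required."
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"   # any non-empty value
```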

Intrusion Detection

Part of operating a production-grade cluster is monitoring the involved systems for malicious activity to detect and mitigate security breaches. Tools that support the operator with this task are called intrusion detection systems.

UKS can install an eBPF based intrusion detection system tailored for Kubernetes. With this tool it is possible to detect and react to security-significant events, such as

  • Process execution events
  • System call activity
  • I/O activity including network & file access

Because it is Kubernetes-aware, it understands Kubernetes identities such as namespaces, pods, and so on. This means that security event detection can be configured in relation to individual workloads running in the cluster.

By leveraging eBPF, it is able to apply policy and filtering directly in the kernel: filtering, blocking, and reacting to events happen in kernel space instead of being forwarded to a user-space agent. This drastically reduces observation overhead.

Additionally, it can hook into any kernel function to instrument and trace all relevant syscalls or access kernel state. It can then join this kernel state with Kubernetes awareness or user policy to create rules enforced by the kernel in real time. This allows associating processes with their namespaces and capabilities, sockets with processes, file descriptors with filenames, and so on. For example, when an application changes its privileges, a policy can trigger an alert or even kill the process before it has a chance to complete the syscall and potentially run additional syscalls.
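The document does not name the tool, but the description matches Tetragon from the Cilium project. Assuming that, a hypothetical TracingPolicy killing a process that attempts setuid might be sketched as follows (the policy name is made up, and field names should be checked against the deployed version):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: kill-setuid               # hypothetical policy name
spec:
  kprobes:
  - call: "sys_setuid"            # hook the setuid syscall in the kernel
    syscall: true
    args:
    - index: 0
      type: "int"                 # the requested UID
    selectors:
    - matchActions:
      - action: Sigkill           # kill the process before the syscall completes
```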