calico-cni

The CNI plugin binary installer for Project Calico: installs the calico and calico-ipam plugin binaries onto each Kubernetes node when the calico-node pod starts.

Related images: calico/node, calico/kube-controllers, calico/typha, calico/apiserver

What is calico-cni?

The calico-cni image provides the Container Network Interface (CNI) plugin binaries for Project Calico, one of the most widely adopted open-source networking and network security solutions for Kubernetes. CNI is the standard Kubernetes extension point for pod networking — when a pod is scheduled on a node, Kubernetes calls the installed CNI plugin to wire up that pod's network interface and assign it an IP address.

The calico-cni image is not a long-running container. It runs as an init container as part of the calico-node DaemonSet, copying the calico and calico-ipam plugin binaries into the host's /opt/cni/bin/ directory so the kubelet can invoke them when pods are created or deleted. Once the binaries are installed, the init container exits. The actual enforcement of network policy and BGP routing is handled by the calico-node and calico-kube-controllers components — calico-cni is purely the on-node plugin interface between Kubernetes and the rest of the Calico stack.
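As a sketch, the init container wiring looks roughly like the fragment below. The container name, image tag, and mount paths are illustrative; the Tigera Operator renders the actual DaemonSet manifest:

```yaml
# Illustrative fragment of the calico-node DaemonSet (values are examples,
# not the operator's exact output).
initContainers:
  - name: install-cni
    image: registry.echo.ai/calico/cni:v3.28.0
    command: ["/opt/cni/bin/install"]   # copies plugin binaries, then exits
    volumeMounts:
      - name: cni-bin-dir
        mountPath: /host/opt/cni/bin    # plugin binaries land here
      - name: cni-net-dir
        mountPath: /host/etc/cni/net.d  # CNI network config is written here
volumes:
  - name: cni-bin-dir
    hostPath:
      path: /opt/cni/bin
  - name: cni-net-dir
    hostPath:
      path: /etc/cni/net.d
```

The hostPath volumes are what give the init container its reach onto the node: the kubelet later invokes whatever binaries sit in /opt/cni/bin.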

It is relevant to platform engineers managing Kubernetes networking, and to security teams that need a hardened, CVE-free image at the foundation of pod networking on every node.

How to use this image

Calico is deployed via the Tigera Operator, which manages the full lifecycle of all Calico component images, including calico-cni. The recommended installation path uses Helm to deploy the operator, then an Installation custom resource to configure the cluster.

Install the Tigera Operator via Helm:

helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install calico projectcalico/tigera-operator \
  --namespace tigera-operator \
  --create-namespace
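The ImageSet resource shown next needs a per-image digest. One way to look these up is with crane from go-containerregistry; the registry host and version tag below are the examples used in this guide, not canonical values:

```shell
# Hypothetical digest lookup with crane (github.com/google/go-containerregistry).
# Adjust the registry host and tag to match your environment.
for img in cni node kube-controllers typha; do
  echo -n "calico/${img}: "
  crane digest "registry.echo.ai/calico/${img}:v3.28.0"
done
```

Each line of output pairs an image name with the sha256 digest to paste into the ImageSet.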

To use echo's calico-cni image, create an ImageSet resource that pins all Calico component images to their echo registry digests. The operator selects the ImageSet by its name, which must follow the calico-<version> convention for the Calico version being installed, so the Installation resource does not reference it directly:

apiVersion: operator.tigera.io/v1
kind: ImageSet
metadata:
  name: calico-v3.28.0
spec:
  images:
    - image: calico/cni
      digest: sha256:<digest-from-registry.echo.ai>
    - image: calico/node
      digest: sha256:<digest-from-registry.echo.ai>
    - image: calico/kube-controllers
      digest: sha256:<digest-from-registry.echo.ai>
---
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  registry: registry.echo.ai
  imagePath: calico
  calicoNetwork:
    ipPools:
      - cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet

Once applied, the operator deploys the calico-node DaemonSet across all nodes. On each node, the calico-cni init container runs first, installs the CNI binaries to the host, and exits — after which the main calico-node container starts and registers with the cluster. Node status transitions from NotReady to Ready as each node completes CNI installation.
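The rollout can be confirmed with a few standard checks. The commands below use real kubectl subcommands and the operator's TigeraStatus resource; exact output shapes vary by cluster:

```shell
# Watch the operator-managed components become Available.
kubectl get tigerastatus
# The calico-node DaemonSet should report one ready pod per node.
kubectl get daemonset calico-node -n calico-system
# Nodes flip from NotReady to Ready once the CNI binaries are installed.
kubectl get nodes
```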

Image variants

Published under calico/cni and mirrored at registry.echo.ai/calico/cni, the image is versioned in lockstep with the Calico platform:

  • calico/cni:v<version> — Version-pinned tags (e.g., v3.28.0) aligned with Calico releases. All Calico component images — cni, node, kube-controllers, typha — must run the same version; pinning them together in a single ImageSet keeps them aligned when deploying via the Tigera Operator.
  • Architecture variants (-amd64, -arm64) are available for multi-arch clusters, though the operator selects the correct architecture automatically when using the manifest list tags.
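To check which architectures a manifest list tag actually covers, you can inspect it directly; the registry host and tag here are the examples from this guide:

```shell
# Show the per-architecture entries in the multi-arch manifest list.
docker manifest inspect registry.echo.ai/calico/cni:v3.28.0 \
  | grep -A2 '"platform"'
```

Each matched block lists an architecture/os pair, so a multi-arch tag should show both amd64 and arm64 entries.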

Because calico-cni runs as an init container on every node and directly installs binaries onto the host filesystem, it is one of the highest-privilege images in a typical cluster and a meaningful CVE surface to keep clean.
