Removing Cilium from a k3s cluster is not as simple as deleting the Helm release or ArgoCD application. Cilium leaves behind BPF programs and maps pinned in the kernel's BPF filesystem, iptables chains, CNI config, and virtual interfaces, none of which go away when the pods are deleted. Skip the cleanup step and that residue will break pod routing on every node.
This is the full removal sequence, including the privileged DaemonSet that does the actual cleanup work.
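If you want to see what that residue looks like, these read-only checks on any node will show it. Interface and pin names vary with Cilium version and datapath mode, so treat the patterns as a starting point:
# Cilium's virtual interfaces (cilium_host, cilium_net, cilium_vxlan, lxc*)
ip link show | grep -E 'cilium|lxc'
# BPF maps pinned to the BPF filesystem
ls /sys/fs/bpf/tc/globals
# Cilium's iptables chains
iptables-save | grep CILIUM | head
# the CNI config Cilium wrote
ls /etc/cni/net.d/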
Before you start
Check your Cilium image version — you’ll need it for the cleanup DaemonSet:
kubectl get ds cilium -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'
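The output is the full image reference, something like:
quay.io/cilium/cilium:v1.19.3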
1. Scale down ArgoCD
If you’re using ArgoCD, stop it first. Otherwise it will fight you mid-removal by recreating deleted resources.
kubectl scale deployment argocd-applicationset-controller argocd-dex-server \
argocd-notifications-controller argocd-redis argocd-repo-server argocd-server \
-n argocd --replicas=0
kubectl scale statefulset argocd-application-controller -n argocd --replicas=0
Skip this step if you’re not using ArgoCD.
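Before moving on, make sure nothing in the namespace is still running:
kubectl get pods -n argocd
The list should be empty, or show only pods in Terminating.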
2. Delete Cilium workloads
kubectl delete application cilium -n argocd --ignore-not-found
kubectl delete ds cilium cilium-envoy -n kube-system --ignore-not-found
kubectl delete deploy cilium-operator hubble-relay hubble-ui -n kube-system --ignore-not-found
kubectl delete svc cilium-envoy hubble-peer hubble-relay hubble-ui -n kube-system --ignore-not-found
kubectl delete sa cilium cilium-operator hubble-relay hubble-ui -n kube-system --ignore-not-found
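Check that everything is actually gone before proceeding; this should return nothing once termination finishes:
kubectl get pods -n kube-system | grep -E 'cilium|hubble'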
3. Clean up BPF state on every node
This is the step most guides miss. The cilium-dbg post-uninstall-cleanup command removes BPF maps and programs, iptables rules, CNI config, and virtual interfaces from the node it runs on. It has to run on every node, which makes a privileged DaemonSet the right tool.
The initContainer pattern is important: the pod only transitions to Running after cleanup finishes, which means kubectl rollout status gives you an accurate picture of progress across all nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cilium-cleanup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cilium-cleanup
  template:
    metadata:
      labels:
        app: cilium-cleanup
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
        - operator: Exists
      initContainers:
        - name: cleanup
          image: quay.io/cilium/cilium:v1.19.3 # use the version from "Before you start"
          command: ["/bin/sh", "-c"]
          args:
            - |
              cilium-dbg post-uninstall-cleanup --all-state -f
              echo "Done on $(hostname)"
          securityContext:
            privileged: true
          volumeMounts:
            - name: bpf
              mountPath: /sys/fs/bpf
            - name: cni-conf
              mountPath: /etc/cni/net.d
            - name: cni-bin
              mountPath: /opt/cni/bin
      containers:
        - name: done
          image: busybox
          # busybox's sleep doesn't accept "infinity"; loop instead
          command: ["sh", "-c", "while :; do sleep 3600; done"]
      volumes:
        - name: bpf
          hostPath:
            path: /sys/fs/bpf
        - name: cni-conf
          hostPath:
            path: /etc/cni/net.d
        - name: cni-bin
          hostPath:
            path: /opt/cni/bin
Save the manifest (say, as cilium-cleanup.yaml), apply it, and wait:
kubectl apply -f cilium-cleanup.yaml
kubectl rollout status ds/cilium-cleanup -n kube-system
kubectl logs -n kube-system -l app=cilium-cleanup -c cleanup --prefix
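With --prefix, each line is tagged with the pod that produced it. A healthy run looks something like this (pod and node names will differ):
[pod/cilium-cleanup-7xk2p/cleanup] Done on node-1
[pod/cilium-cleanup-m94qd/cleanup] Done on node-2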
4. Delete the cleanup DaemonSet, CRDs, and namespace
kubectl delete ds cilium-cleanup -n kube-system
kubectl delete crd $(kubectl get crd | grep cilium | awk '{print $1}')
kubectl delete namespace cilium-secrets --ignore-not-found
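A quick check that nothing Cilium-flavored is left registered with the API server; both commands should print nothing:
kubectl get crd | grep cilium
kubectl get ns cilium-secrets --ignore-not-found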
5. Remove from git
Delete the Cilium Application manifest and values file from your repo and push. ArgoCD drives entirely from git — if the manifests are gone, it won’t recreate them on startup.
6. Bring ArgoCD back up
kubectl scale deployment argocd-applicationset-controller argocd-dex-server \
argocd-notifications-controller argocd-redis argocd-repo-server argocd-server \
-n argocd --replicas=1
kubectl scale statefulset argocd-application-controller -n argocd --replicas=1
kubectl rollout status deployment/argocd-server -n argocd
Verify Cilium is gone from the application list:
kubectl get applications -n argocd
k3s ships Flannel as its default CNI, so once Cilium's state is cleared, Flannel can take over. One caveat: if you originally started k3s with --flannel-backend=none (the usual prerequisite for installing Cilium), you'll need to drop that flag (and --disable-network-policy, if you set it) and restart the k3s service on each node before Flannel comes back. Existing pods also keep their Cilium-era networking until they're recreated, so restart your workloads, or simply reboot the nodes, to move everything over.
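To verify the handover, check a node for Flannel's VXLAN interface (assuming the default vxlan backend) and launch a throwaway pod to confirm it gets an address from Flannel's CIDR (10.42.0.0/16 on a stock k3s):
# on a node
ip link show flannel.1
# from anywhere with kubectl access
kubectl run cni-test --image=busybox --rm -it --restart=Never -- ip addr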