Kubernetes Meets the Data Center: Cilium BGP, FRR Leaf-Spine Fabric, and Role-Based L3/L7 Policy Enforcement
I built a complete leaf-spine data center fabric, wired it into a Kubernetes cluster running Cilium with native BGP routing, exposed services through LoadBalancer VIPs with L2 announcements, terminated TLS at the edge, enforced role-based access control with L3/L7 CiliumNetworkPolicy, and then traced the end-to-end data path, from client request to pod response, through Hubble's real-time flow observability.
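As a taste of what role-based L3/L7 enforcement looks like, here is a minimal CiliumNetworkPolicy sketch. The `role=frontend`/`role=backend` labels, port, and HTTP route are illustrative assumptions, not the exact manifests from this lab:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend-http
spec:
  # Applies to pods labeled role=backend (illustrative label)
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend   # L3/L4: only frontend pods may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:            # L7: only GET /api/* is allowed
              - method: "GET"
                path: "/api/.*"
```

The key idea: the same policy object selects peers by identity labels (L3) and, via `rules.http`, restricts the allowed methods and paths (L7), with Cilium's eBPF datapath and Envoy proxy doing the enforcement.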
None of this is theoretical. Every configuration in this post was deployed, tested, broken, debugged, and verified. The lab runs on Docker Desktop with kind, FRR for the fabric, and Cilium 1.17.1 as the CNI. No cloud provider. No managed control plane. Just containers, BGP sessions, and eBPF programs doing the work.
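To reproduce a setup along these lines, the kind cluster is created with the default CNI (and optionally kube-proxy) disabled so Cilium can take over networking. This is a minimal sketch under those assumptions, not the exact lab configuration:

```yaml
# kind-config.yaml: default CNI and kube-proxy disabled so Cilium
# (installed afterwards, e.g. `cilium install --version 1.17.1`)
# provides networking and service handling via eBPF.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

With `kubeProxyMode: none`, Cilium must be installed with kube-proxy replacement enabled before the worker nodes become Ready.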
Whether you're a network engineer wondering how Kubernetes actually integrates with your routing fabric, a platform engineer designing service exposure strategies, or a DevOps engineer trying to understand how Cilium enforces policy at L3 and L7 at the same time, this post walks you through the entire stack, layer by layer, with real output from real tests.
The Topology
The interactive diagram below shows the complete lab architecture. Toggle each layer to focus on specific aspects of the design: