Kubernetes Meets the Data Center: Cilium BGP, FRR Leaf-Spine Fabric, and Role-Based L3/L7 Policy Enforcement

I built a complete leaf-spine data center fabric, wired it into a Kubernetes cluster running Cilium with native BGP routing, exposed services through LoadBalancer VIPs with L2 announcements, terminated TLS at the edge, enforced role-based access control with L3/L7 CiliumNetworkPolicy, and then watched the whole thing through Hubble's real-time flow observatory.

None of this is theoretical. Every configuration in this post was deployed, tested, broken, debugged, and verified. The lab runs on Docker Desktop with kind, FRR for the fabric, and Cilium 1.17.1 as the CNI. No cloud provider. No managed control plane. Just containers, BGP sessions, and eBPF programs doing the work.
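To give a sense of how the node side of those BGP sessions is wired up, here is a minimal sketch of a Cilium BGP peering policy. The ASNs match the topology (nodes in AS 65020 peering eBGP with the leaves); the resource name, node label, and peer address are illustrative assumptions, not the lab's actual manifests:

```yaml
# Sketch only: the name, node label, and peer address are assumptions.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: worker-peering              # hypothetical name
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled                  # assumed node label
  virtualRouters:
  - localASN: 65020                 # node ASN from the topology
    exportPodCIDR: true             # advertise this node's 10.244.x.0/24
    neighbors:
    - peerAddress: "172.16.10.1/32" # assumed leaf address on the 172.16.10.0/29 link
      peerASN: 65011                # leaf-1's ASN from the topology
```

In practice each node peers with its own leaf, so you'd either template per-node policies or use Cilium's newer BGPv2 resources (CiliumBGPClusterConfig and friends), which split peer and advertisement settings into separate CRDs.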

Whether you're a network engineer wondering how Kubernetes actually integrates with your routing fabric, a platform engineer designing service exposure strategies, or a DevOps engineer trying to understand how Cilium enforces policy at L3 and L7 at the same time, this post walks you through the entire stack, layer by layer, with real output from real tests.

The Topology

The diagram below captures the complete lab architecture, layer by layer:

Spine tier: Spine-1 (AS 65001) and Spine-2 (AS 65002). Each spine has a /29 point-to-point link to every leaf: .1.0/29, .1.8/29, .1.16/29, and .1.24/29 from spine-1; .2.0/29, .2.8/29, .2.16/29, and .2.24/29 from spine-2.

Leaf tier: Leaf-1 through Leaf-4 (AS 65011 through 65014), each connecting down to one Kubernetes node over 172.16.10.0/29, 172.16.10.8/29, 172.16.10.16/29, and 172.16.10.24/29 respectively.

Kubernetes tier (Cilium 1.17.1): one control-plane node (pod CIDR 10.244.0.0/24) and three workers (each with its own 10.244.x.0/24), all in AS 65020, with an nginx backend on every worker.

Edge: the kind cluster sits on the Docker bridge (172.20.0.0/16) alongside an external-client container (netshoot). LoadBalancer VIPs come from a CiliumLoadBalancerIPPool of 172.20.255.200/29 with L2 announcements: VIP .201 via leaf2, .202 via leaf3, .203 via leaf4.

Policy enforcement (CiliumNetworkPolicy, L3 + L7) defines four client roles:

- admin-client: full access, GET /.* (all paths)
- dev-client: dev access, GET /, /api.*, /health
- public-client: public only, GET /, /health
- blocked-client: no fromCIDR match, so default deny and a silent DROP at L3

Result matrix (200 = allowed, 403 = Envoy L7 deny, DROP = L3 silent drop):

Path       admin  dev   public  blocked
/          200    200   200     DROP
/api       200    200   403     DROP
/admin     200    403   403     DROP
/metrics   200    403   403     DROP
/health    200    200   200     DROP

Stack summary: Cilium 1.17.1, native routing, eBGP, L2 announcements, BPF masquerade, Envoy for L7, Hubble UI on :12000.
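To make the matrix concrete, the dev tier's rule could be expressed as a CiliumNetworkPolicy like the sketch below. The policy name, pod label, and source CIDR are assumptions; the HTTP rules mirror the dev row of the matrix (GET on /, /api.*, and /health allowed, everything else answered with a 403 by the Envoy proxy):

```yaml
# Sketch only: name, labels, and CIDR are assumptions.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: dev-client-access       # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: nginx                # assumed label on the backend pods
  ingress:
  - fromCIDR:
    - 172.20.0.0/16             # assumed source range; a per-client /32 would be tighter
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:                   # presence of L7 rules redirects traffic through Envoy
        - method: GET
          path: "/"
        - method: GET
          path: "/api.*"
        - method: GET
          path: "/health"
```

A client whose source address matches no policy's fromCIDR never reaches Envoy at all: it falls through to default deny and is dropped silently at L3, which is exactly the blocked-client row of the matrix.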
