GOVPP - GO CLIENT FOR VPP BINARY API
GoVPP Architecture and Setup
GoVPP (github.com/FDio/govpp) is the official Go library for VPP's binary API. It connects to VPP via a Unix socket or shared memory, sends request messages, and receives reply and notification messages. GoVPP auto-generates Go structs from VPP's .api.json files, so every VPP API is accessible with full type safety.
// ── go.mod setup ──
// go get go.fd.io/govpp@latest

package main

import (
    "fmt"
    "log"

    "go.fd.io/govpp"
    "go.fd.io/govpp/binapi/vpe"
)

func main() {
    // Connect to VPP binary API socket
    conn, err := govpp.Connect("/run/vpp/api.sock")
    if err != nil {
        log.Fatalf("connect: %v", err)
    }
    defer conn.Disconnect()

    // Open a channel - each goroutine should have its own channel
    ch, err := conn.NewAPIChannel()
    if err != nil {
        log.Fatalf("channel: %v", err)
    }
    defer ch.Close()

    // ── Example 1: Show VPP version ──
    req := &vpe.ShowVersion{}
    reply := &vpe.ShowVersionReply{}
    if err := ch.SendRequest(req).ReceiveReply(reply); err != nil {
        log.Fatalf("ShowVersion: %v", err)
    }
    fmt.Printf("VPP version: %s\n", reply.Version)
}
Interface Operations
// Additional imports used below:
//   "go.fd.io/govpp/binapi/fib_types"
//   "go.fd.io/govpp/binapi/interface_types"
//   "go.fd.io/govpp/binapi/interfaces"
//   "go.fd.io/govpp/binapi/ip"
//   "go.fd.io/govpp/binapi/ip_types"

// ── List all interfaces ──
reqCtx := ch.SendMultiRequest(&interfaces.SwInterfaceDump{
    SwIfIndex: interface_types.InterfaceIndex(^uint32(0)), // ~0 = all
})
for {
    details := &interfaces.SwInterfaceDetails{}
    stop, err := reqCtx.ReceiveReply(details)
    if stop {
        break
    }
    if err != nil {
        log.Fatalf("recv: %v", err)
    }
    // Admin/link state is carried in the Flags bitfield
    fmt.Printf(" [%d] %s admin:%v link:%v\n",
        details.SwIfIndex, details.InterfaceName,
        details.Flags&interface_types.IF_STATUS_API_FLAG_ADMIN_UP != 0,
        details.Flags&interface_types.IF_STATUS_API_FLAG_LINK_UP != 0)
}

// ── Set interface state up ──
// swIfIndex comes from the dump above.
// SendRequest(...).ReceiveReply(...) returns only an error.
err := ch.SendRequest(&interfaces.SwInterfaceSetFlags{
    SwIfIndex: interface_types.InterfaceIndex(swIfIndex),
    Flags:     interface_types.IF_STATUS_API_FLAG_ADMIN_UP,
}).ReceiveReply(&interfaces.SwInterfaceSetFlagsReply{})

// ── Add IPv4 address ──
err = ch.SendRequest(&interfaces.SwInterfaceAddDelAddress{
    SwIfIndex: interface_types.InterfaceIndex(swIfIndex),
    IsAdd:     true,
    Prefix: ip_types.AddressWithPrefix{
        Address: ip_types.Address{
            Af: ip_types.ADDRESS_IP4,
            Un: ip_types.AddressUnionIP4(ip_types.IP4Address{10, 0, 0, 1}),
        },
        Len: 24,
    },
}).ReceiveReply(&interfaces.SwInterfaceAddDelAddressReply{})

// ── Add a static route ──
// FibPath and its enums live in binapi/fib_types in current GoVPP
err = ch.SendRequest(&ip.IPRouteAddDel{
    IsAdd: true,
    Route: ip.IPRoute{
        TableID: 0,
        Prefix: ip_types.Prefix{
            Address: ip_types.Address{
                Af: ip_types.ADDRESS_IP4,
                Un: ip_types.AddressUnionIP4(ip_types.IP4Address{10, 1, 0, 0}),
            },
            Len: 24,
        },
        Paths: []fib_types.FibPath{{
            SwIfIndex: uint32(swIfIndex),
            Proto:     fib_types.FIB_API_PATH_NH_PROTO_IP4,
            Nh: fib_types.FibPathNh{
                Address: ip_types.AddressUnionIP4(
                    ip_types.IP4Address{10, 0, 0, 2}),
            },
            Weight:     1,
            Preference: 0,
        }},
    },
}).ReceiveReply(&ip.IPRouteAddDelReply{})
GOVPP - NOTIFICATIONS AND CHANNELS
Event Subscriptions and Multi-Channel Patterns
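VPP can push asynchronous events (link state changes, DHCP leases, etc.) to subscribed clients: you subscribe on a channel with SubscribeNotification, then enable the event with the corresponding want_* request. Remember the multi-channel rule - the channel consuming notifications belongs to one goroutine and is never shared. Below is a minimal sketch for interface link events, reusing the ch channel and binapi imports from the setup above plus "os" and "go.fd.io/govpp/api":

// Sketch: subscribe to SwInterfaceEvent notifications.
// Run this in its own goroutine with its own channel.
notifCh := make(chan api.Message, 100)

// Subscribe before enabling events so none are missed
sub, err := ch.SubscribeNotification(notifCh, &interfaces.SwInterfaceEvent{})
if err != nil {
    log.Fatalf("subscribe: %v", err)
}
defer sub.Unsubscribe()

// Ask VPP to start sending interface events to this client
err = ch.SendRequest(&interfaces.WantInterfaceEvents{
    EnableDisable: 1,
    PID:           uint32(os.Getpid()),
}).ReceiveReply(&interfaces.WantInterfaceEventsReply{})
if err != nil {
    log.Fatalf("want_interface_events: %v", err)
}

for msg := range notifCh {
    e, ok := msg.(*interfaces.SwInterfaceEvent)
    if !ok {
        continue
    }
    linkUp := e.Flags&interface_types.IF_STATUS_API_FLAG_LINK_UP != 0
    log.Printf("interface %d link up:%v deleted:%v", e.SwIfIndex, linkUp, e.Deleted)
}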
STATS API - HIGH-FREQUENCY TELEMETRY
VPP Stats Segment - Zero-Copy Telemetry
The Stats API is VPP's high-performance telemetry interface. It exposes per-node, per-interface, per-worker, and per-error counters via a shared memory segment - no IPC, no socket round-trip. A monitoring agent can read millions of counters per second without impacting the VPP dataplane.
// ── GoVPP Stats client ──
import (
    "fmt"
    "log"
    "time"

    "go.fd.io/govpp/adapter/statsclient"
    "go.fd.io/govpp/api"
    "go.fd.io/govpp/core"
)

func monitorVPP() {
    // Connect to the stats segment (separate from the binary API socket)
    sc := statsclient.NewStatsClient("/run/vpp/stats.sock")
    conn, err := core.ConnectStats(sc)
    if err != nil {
        log.Fatalf("stats connect: %v", err)
    }
    defer conn.Disconnect()

    // ── Poll interface counters ──
    ifStats := new(api.InterfaceStats)
    if err := conn.GetInterfaceStats(ifStats); err != nil {
        log.Fatalf("interface stats: %v", err)
    }
    for _, ifc := range ifStats.Interfaces {
        fmt.Printf("%-30s rx: %8d pkts %12d bytes tx: %8d pkts %12d bytes\n",
            ifc.InterfaceName, ifc.Rx.Packets, ifc.Rx.Bytes,
            ifc.Tx.Packets, ifc.Tx.Bytes)
    }

    // ── Poll per-node stats (show run equivalent) ──
    nodeStats := new(api.NodeStats)
    if err := conn.GetNodeStats(nodeStats); err != nil {
        log.Fatalf("node stats: %v", err)
    }
    for _, nc := range nodeStats.Nodes {
        if nc.Calls == 0 {
            continue
        }
        fmt.Printf("%-40s calls:%8d vectors:%8d vecs/call:%.1f\n",
            nc.NodeName, nc.Calls, nc.Vectors,
            float64(nc.Vectors)/float64(nc.Calls))
    }

    // ── Poll error counters (show error equivalent) ──
    errStats := new(api.ErrorStats)
    if err := conn.GetErrorStats(errStats); err != nil {
        log.Fatalf("error stats: %v", err)
    }
    for _, ec := range errStats.Errors {
        var total uint64
        for _, v := range ec.Values { // one value per worker thread
            total += v
        }
        if total == 0 {
            continue
        }
        fmt.Printf("%-50s %d\n", ec.CounterName, total)
    }

    // ── Continuous monitoring loop ──
    // The stats segment uses an epoch/version counter for consistency;
    // the client handles the epoch check internally on each read.
    ticker := time.NewTicker(1 * time.Second)
    for range ticker.C {
        _ = conn.GetInterfaceStats(ifStats)
        exportMetrics(ifStats) // Prometheus, InfluxDB, etc. (see sketch below)
    }
}
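The exportMetrics call above is left as a placeholder. A minimal sketch of one way to implement it with the Prometheus Go client (github.com/prometheus/client_golang - an assumption for illustration, not a GoVPP dependency) might look like:

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"

    "go.fd.io/govpp/api"
)

// One gauge per counter; labelled by interface name
var rxPackets = prometheus.NewGaugeVec(prometheus.GaugeOpts{
    Name: "vpp_interface_rx_packets",
    Help: "Received packets per VPP interface",
}, []string{"interface"})

func init() {
    prometheus.MustRegister(rxPackets)
    // Serve metrics on :9090/metrics (matches the project spec below)
    http.Handle("/metrics", promhttp.Handler())
    go http.ListenAndServe(":9090", nil)
}

// exportMetrics publishes the latest counter snapshot
func exportMetrics(stats *api.InterfaceStats) {
    for _, ifc := range stats.Interfaces {
        rxPackets.WithLabelValues(ifc.InterfaceName).Set(float64(ifc.Rx.Packets))
    }
}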
💡 Stats segment vs binary API for telemetry: The Stats API reads from shared memory - it costs ~1 microsecond per read. The binary API requires a socket round-trip - ~50–100 microseconds. For polling counters at 1Hz or faster, always use the Stats API. Use the binary API only for configuration operations (add route, set interface state).
VPP_PAPI - PYTHON BINDINGS
vpp_papi - Scripting and Automation
vpp_papi (src/vpp-api/python/vpp_papi/) provides Python bindings for VPP's binary API. It is the same library used by VPP's Python test framework. Use it for automation scripts, management integrations, and quick prototyping.
import os
import socket

from vpp_papi import VPPApiClient

# Connect to VPP - API .json definitions are auto-discovered from
# standard locations such as /usr/share/vpp/api/
vpp = VPPApiClient(server_address="/run/vpp/api.sock")
vpp.connect("my-python-agent")

# ── Show version ──
rv = vpp.api.show_version()
print(f"VPP: {rv.version}")

# ── List interfaces ──
for intf in vpp.api.sw_interface_dump():
    link_up = bool(intf.flags & 2)  # bit 2 = IF_STATUS_API_FLAG_LINK_UP
    print(f" [{intf.sw_if_index}] {intf.interface_name} "
          f"link={'up' if link_up else 'down'}")

# ── Create a TAP interface ──
rv = vpp.api.tap_create_v3(
    id=0,
    use_random_mac=True,
    host_if_name_set=True,
    host_if_name="vpp0",
    host_ip4_prefix_set=True,
    # ip4_address_with_prefix: a bare IPv4 address plus length (no AF union)
    host_ip4_prefix={"address": socket.inet_aton("10.10.0.2"), "len": 30},
)
print(f"TAP created: sw_if_index={rv.sw_if_index}")

# ── Add an IP route ──
vpp.api.ip_route_add_del(
    is_add=True,
    route={
        "prefix": {"address": {"af": "ADDRESS_IP4",
                               "un": {"ip4": socket.inet_aton("10.1.0.0")}},
                   "len": 24},
        "n_paths": 1,
        "paths": [{"sw_if_index": rv.sw_if_index,
                   "proto": "FIB_API_PATH_NH_PROTO_IP4",
                   "nh": {"address": {"ip4": socket.inet_aton("10.10.0.1")}},
                   "weight": 1,
                   "preference": 0}],
    },
)

# ── Subscribe to interface events ──
def on_interface_event(msg_name, msg):
    if msg_name == "sw_interface_event":
        link_up = bool(msg.flags & 2)
        print(f"Interface {msg.sw_if_index} link {'up' if link_up else 'down'}")

vpp.register_event_callback(on_interface_event)
vpp.api.want_interface_events(enable_disable=1, pid=os.getpid())

vpp.disconnect()
PERFORMANCE TUNING AND PRODUCTION PATTERNS
NUMA Awareness and CPU Topology
VPP performance is highly sensitive to NUMA placement. Accessing memory across NUMA nodes adds ~100ns latency and reduces throughput by 30–50%. The goal is to keep NIC, hugepages, CPU cores, and worker threads all on the same NUMA node.
# Step 1: Find which NUMA node your Mellanox NIC is on
cat /sys/bus/pci/devices/0000:03:00.0/numa_node
# e.g. output: 0 → NUMA 0

# Step 2: Find NUMA-local CPU cores
lscpu | grep -A5 "NUMA node0"
# e.g. NUMA node0 CPU(s): 0-11,24-35

# Step 3: Configure startup.conf to use NUMA-local cores
cpu {
    main-core 0              # core 0 on NUMA 0
    corelist-workers 2-5     # cores 2-5 on NUMA 0
}
dpdk {
    socket-mem 4096,0        # 4GB on NUMA 0, 0 on NUMA 1
}
buffers {
    buffers-per-numa 262144  # 256K buffers on NUMA 0
}

# Step 4: Verify with VPP
# vppctl: show interface rx-placement
# Verify each queue is on a worker thread whose core is NUMA-local to the NIC
Project 8: GoVPP Control Plane Agent
Objective: Build a Go agent that manages a VPP instance - configures interfaces, programs routes, polls stats, and exposes a REST API for a management frontend.
- Connect(socketPath string): establish the GoVPP connection and open a pool of channels (one per goroutine). Handle reconnect with exponential backoff on disconnect (see the sketch after this list).
- ConfigureInterface(name string, ip string, prefix int): list interfaces, find by name, set admin-up, add the IP address. Return an error if the interface is not found.
- ProgramRoutes(routes []Route): batch-program a list of static routes using a dedicated goroutine + channel. Measure the time to program 1000 routes and report routes/second.
- Stats export: poll interface counters from the Stats segment and expose them as Prometheus metrics at :9090/metrics.
- Event handling: subscribe to SwInterfaceEvent and log all link state changes with a timestamp. Test by toggling an interface up/down via vppctl and verifying the agent logs the event.
- REST API: GET /interfaces returns a JSON list of all VPP interfaces with counters. POST /routes programs a new route. DELETE /routes/{prefix} removes it. Test with curl.
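For the first task, a minimal sketch of connection management using GoVPP's AsyncConnect, which retries in the background and reports state changes on an event channel (the Connect wrapper and the way you hand off conn are assumptions; GoVPP's built-in retry uses a fixed interval, so true exponential backoff would wrap this loop yourself):

import (
    "log"

    "go.fd.io/govpp"
    "go.fd.io/govpp/core"
)

// Sketch: connect asynchronously and watch connection state.
func Connect(socketPath string) error {
    conn, connEv, err := govpp.AsyncConnect(socketPath,
        core.DefaultMaxReconnectAttempts, core.DefaultReconnectInterval)
    if err != nil {
        return err
    }
    go func() {
        for ev := range connEv {
            switch ev.State {
            case core.Connected:
                log.Println("VPP connected - (re)open the channel pool here")
            case core.Disconnected, core.Failed:
                log.Printf("VPP connection lost: %v", ev.Error)
            }
        }
    }()
    _ = conn // hand conn to the channel pool
    return nil
}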
Project 9: End-to-End Production Topology
Objective: Integrate all phases into a complete topology: VPP + DPDK physical ports + memif container connections + linux-cp for the control plane + GoVPP management agent + observability.
Success criteria: no unexpected drops (show error), stable vectors/call for dpdk-input (32–256), free buffer percentage stays above 30%, and the FRR OSPF adjacency stays up throughout.
P5 COMPLETION CHECKLIST
- Can connect to VPP from Go using GoVPP, open channels, and send request/reply messages
- Know the multi-channel pattern: one channel per goroutine, no sharing
- Can implement interface dump, set interface state, add IP address in Go
- Can implement IP route add/delete with correct ip_types structures
- Can subscribe to VPP events (want_interface_events) and handle them in a goroutine
- Understand the Stats API architecture: shared memory, zero IPC cost
- Can connect to Stats segment and poll interface, node, and error counters
- Know when to use Stats API vs binary API (telemetry vs configuration)
- Can write a vpp_papi Python script: connect, API call, event subscription
- Know the 7 key NUMA/performance tuning areas and the CLI to verify each
- Understand workers=queues constraint and how to size buffers-per-numa
- Completed Project 8 (GoVPP agent with Prometheus) and Project 9 (full production topology)
🎉 Phase 5 complete. You can now build production VPP deployments end-to-end: from DPDK physical interfaces through custom plugins to a fully automated GoVPP control plane with observability. Bonus: continue to the Host Stack module to explore VPP's TCP/Session layer, VCL, and application namespaces.