THE UNIVERSAL INTERFACE HANDLE
sw_if_index - VPP's Interface Abstraction
CORE CONCEPT: Every interface in VPP - DPDK physical port, memif, TAP, loopback, VLAN sub-interface - is represented by a single u32 software interface index. Graph nodes never deal with concrete interface types; they always refer to interfaces by sw_if_index.
There are two levels of interface index:
- hw_if_index - hardware interface: corresponds to a physical device or PMD (e.g., the DPDK port). One per physical NIC port.
- sw_if_index - software interface: can be the base interface OR a sub-interface (VLAN, QinQ). Multiple sw_if_index values can share one hw_if_index.
```c
/* Get sw_if_index from a received packet */
u32 sw_if_index = vnet_buffer (b)->sw_if_index[VLIB_RX];

/* Get sw_if_index by name - CLI handlers use the unformat function */
vnet_main_t *vnm = vnet_get_main ();
u32 sw_if_index_by_name;
if (!unformat (input, "%U", unformat_vnet_sw_interface, vnm,
               &sw_if_index_by_name))
  return clib_error_return (0, "unknown interface");

/* Get interface details */
vnet_sw_interface_t *sw = vnet_get_sw_interface (vnm, sw_if_index);
vnet_hw_interface_t *hw = vnet_get_hw_interface (vnm, sw->hw_if_index);

/* Set interface admin state */
vnet_sw_interface_set_flags (vnm, sw_if_index,
                             VNET_SW_INTERFACE_FLAG_ADMIN_UP);

/* Assign an IP address programmatically (from a plugin) */
ip4_address_t addr = { .as_u32 = clib_host_to_net_u32 (0x0a000001) };
ip4_add_del_interface_address (vm, sw_if_index, &addr,
                               24 /* prefix length */, 0 /* is_del */);
```
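To see the split in practice: creating a VLAN sub-interface allocates a fresh sw_if_index on the parent's hw_if_index. A quick CLI sketch (interface name illustrative):

```c
/* vppctl> create sub-interfaces GigabitEthernet0/8/0 100
     -> GigabitEthernet0/8/0.100 gets its own sw_if_index
        but shares the parent's hw_if_index
   vppctl> set interface state GigabitEthernet0/8/0.100 up */
```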
- sw_if_index ≈ port_id in DPDK, but sw_if_index is virtual and can represent logical interfaces layered above the physical device
- In DPDK you call rte_eth_rx_burst(port_id, queue_id, ...). In VPP the dpdk-input node calls it internally and stamps vnet_buffer(b)->sw_if_index[VLIB_RX] = sw_if_index
- Sub-interfaces are transparent: a VLAN tag on sw_if_index=3 may resolve to sw_if_index=5 after L2 classification, without any code change in your L3 node
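A minimal sketch of what this looks like from inside a node - no burst API in sight; the input node already stamped the buffer metadata:

```c
/* Inside any graph node: read the receive interface that the input node
   (e.g. dpdk-input) stamped on the buffer */
vlib_buffer_t *b0 = vlib_get_buffer (vm, from[0]);
u32 rx_sw_if_index = vnet_buffer (b0)->sw_if_index[VLIB_RX];

/* The intended output interface travels in the same per-buffer array */
u32 tx_sw_if_index = vnet_buffer (b0)->sw_if_index[VLIB_TX];
```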
COMPOSABLE PACKET PIPELINES
Feature Arcs - What They Are
src/vnet/feature/

A feature arc is a per-interface, ordered list of processing nodes that a packet traverses before the main routing/forwarding node. Features are registered at compile time and enabled per interface at runtime via CLI or API. They are VPP's mechanism for composable, modular packet processing.
Packet arrives at ip4-input
│
▼
[ip4-unicast arc - per interface, in priority order]
┌─────────────────────────────────────────────────┐
│ feature: ip4-full-reassembly (priority 50) │
│ feature: acl-plugin-in-ip4-fa (priority 40) │
│ feature: nat44-in2out (priority 30) │
│ feature: your-custom-node (priority 20) ←──── inserted by you
└─────────────────────────────────────────────────┘
│
▼
ip4-lookup (main forwarding - arc terminal)

The framework calls vnet_feature_next() at the end of each feature node to advance to the next enabled feature, or to the arc's terminal node if none remain. Packets skip disabled features automatically - zero overhead per disabled feature.
Registering Your Node in an Arc
PATTERN

```c
/* In your plugin .c file: register as a feature in the ip4-unicast arc */
VNET_FEATURE_INIT (my_feature, static) = {
  .arc_name = "ip4-unicast",                 /* arc to join */
  .node_name = "my-feature-node",            /* your node name */
  .runs_before = VNET_FEATURES ("ip4-lookup"),   /* ordering constraint */
  .runs_after = VNET_FEATURES ("ip4-full-reassembly-feature"),
};

/* In your node function: advance to the next feature when done */
static uword
my_feature_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
               vlib_frame_t *frame)
{
  u32 *from = vlib_frame_vector_args (frame);
  u32 bi0 = from[0];
  vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
  u32 next_index;
  u16 nexts[1];

  /* Determine the next feature in the arc (not a hard-coded node name!) */
  vnet_feature_next (&next_index, b0); /* reads current_config_index */

  /* OR: early exit - bypass the remaining features and drop, via a
     drop next index you defined in your node registration */
  /* next_index = MY_FEATURE_NEXT_DROP; */

  nexts[0] = (u16) next_index;
  vlib_buffer_enqueue_to_next (vm, node, from, nexts, 1);
  return 1;
}

/* Enable per interface via CLI:
   vppctl> set interface feature GigabitEthernet0/8/0 my-feature-node
           arc ip4-unicast */
/* Enable via API (from GoVPP or Python):
   feature_enable_disable { sw_if_index, arc_name, feature_name, enable=1 } */
```
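The same toggle is available from C - useful in plugin init or test code. A minimal sketch using the names registered above:

```c
/* Enable "my-feature-node" on the ip4-unicast arc for one interface.
   Returns 0 on success; the last two arguments carry optional
   per-interface feature config (unused here). */
int rv = vnet_feature_enable_disable ("ip4-unicast", "my-feature-node",
                                      sw_if_index, 1 /* enable */, 0, 0);
```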
Key arcs you will use:
| Arc Name | Terminal Node | Trigger |
|---|---|---|
| ip4-unicast | ip4-lookup | IPv4 unicast inbound, per interface |
| ip4-multicast | ip4-mfib-forward-lookup | IPv4 multicast inbound |
| ip4-output | ip4-rewrite | IPv4 outbound (post FIB, pre TX) |
| ip6-unicast | ip6-lookup | IPv6 unicast inbound |
| ethernet-output | interface-output | L2 output processing |
FORWARDING INFORMATION BASE
FIB Architecture - Prefix → DPO Chain
src/vnet/fib/

VPP's FIB is a recursive, multi-path forwarding database. It maps IP prefixes to Data Path Objects (DPOs) - a polymorphic chain of forwarding instructions. Understanding the FIB is essential for writing plugins that affect routing.
```c
/* FIB entry structure (simplified):
   Prefix: 10.0.0.0/8  -> [ECMP DPO -> [adj_A, adj_B]]
   Prefix: 0.0.0.0/0   -> [Drop DPO]
   Prefix: 1.2.3.4/32  -> [Receive DPO] (local address) */

/* Add a route programmatically from a plugin */
fib_prefix_t pfx = {
  .fp_len = 24,
  .fp_proto = FIB_PROTOCOL_IP4,
  .fp_addr = {
    .ip4 = { .as_u32 = clib_host_to_net_u32 (0x0a000100) }, /* 10.0.1.0 */
  },
};
fib_route_path_t rpath = {
  .frp_proto = DPO_PROTO_IP4,
  .frp_addr = next_hop_addr,
  .frp_sw_if_index = sw_if_index,
  .frp_weight = 1,
};
fib_route_path_t *rpaths = NULL;
vec_add1 (rpaths, rpath); /* path-add takes a vector of paths */
fib_table_entry_path_add2 (0 /* FIB table 0 = default */, &pfx,
                           FIB_SOURCE_PLUGIN_LOW, FIB_ENTRY_FLAG_NONE,
                           rpaths);
vec_free (rpaths);

/* Lookup in FIB (from a graph node) */
fib_node_index_t fei = fib_table_lookup (fib_index, &pfx);
dpo_id_t dpo = DPO_INVALID;
fib_entry_contribute_forwarding (fei, FIB_FORW_CHAIN_TYPE_UNICAST_IP4, &dpo);
load_balance_t *lb = load_balance_get (dpo.dpoi_index);

/* The normal path: ip4-lookup does this automatically.
   You rarely need to call fib_table_lookup directly from a node. */
```
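Deletion is symmetric - a brief sketch, assuming the same prefix and FIB source used at add time:

```c
/* Remove the entry this source owns; other sources' paths survive */
fib_table_entry_delete (0 /* FIB table 0 = default */, &pfx,
                        FIB_SOURCE_PLUGIN_LOW);
```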
DPO - Data Path Objects
FORWARDING CHAIN: A DPO is a polymorphic forwarding object. Every FIB entry resolves to a DPO chain. Key DPO types:
| DPO Type | Meaning | Next Node |
|---|---|---|
| DPO_ADJACENCY | Rewrite header + send to output interface | ip4-rewrite |
| DPO_ADJACENCY_GLEAN | Trigger ARP for unknown next-hop | ip4-glean |
| DPO_RECEIVE | Packet destined for VPP itself | ip4-local |
| DPO_DROP | Discard packet | error-drop |
| DPO_LOAD_BALANCE | ECMP - select one of N adjacencies | selected child DPO |
| DPO_MPLS_LABEL | Push MPLS label and forward | mpls-output |
| DPO_PUNT | Send to control plane via punt socket | punt-dispatch |
You can register your own DPO type with dpo_register_new_type() to intercept traffic and redirect it through a custom graph node. This is the correct mechanism for tunnel encapsulation, policy routing, and SRv6.
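A minimal registration sketch, assuming hypothetical my_* names and a custom graph node called my-encap-node (the vft callbacks manage the lifetime of whatever object your DPO index points at):

```c
#include <vnet/dpo/dpo.h>

/* All my_* names below are illustrative, not an existing VPP API surface */
static void
my_dpo_lock (dpo_id_t *dpo)
{
  /* bump the refcount on your per-DPO object here */
}

static void
my_dpo_unlock (dpo_id_t *dpo)
{
  /* drop the refcount; free the object when it reaches zero */
}

static u8 *
format_my_dpo (u8 *s, va_list *args)
{
  return format (s, "my-dpo");
}

static const dpo_vft_t my_dpo_vft = {
  .dv_lock = my_dpo_lock,
  .dv_unlock = my_dpo_unlock,
  .dv_format = format_my_dpo,
};

/* Per payload-proto lists of graph nodes packets are handed to */
static const char *const my_dpo_ip4_nodes[] = { "my-encap-node", NULL };
static const char *const *const my_dpo_nodes[DPO_PROTO_NUM] = {
  [DPO_PROTO_IP4] = my_dpo_ip4_nodes,
};

static dpo_type_t my_dpo_type;

static clib_error_t *
my_dpo_init (vlib_main_t *vm)
{
  my_dpo_type = dpo_register_new_type (&my_dpo_vft, my_dpo_nodes);
  return 0;
}
```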
💡 Most plugin authors never touch the FIB directly. The typical pattern is: register a feature arc node to intercept inbound packets, do your processing, and call vnet_feature_next() to continue normal forwarding. Only plugins that add new route types (tunnels, SRv6, custom DPOs) need to interact with the FIB API.
ARP AND NEIGHBOUR RESOLUTION
How ARP Works in VPP
src/vnet/arp/

VPP's ARP runs entirely in the dataplane graph. When ip4-lookup resolves a route to a DPO_ADJACENCY_GLEAN, the packet is handed to the glean node (ip4-glean), which sends an ARP request on the interface and drops the triggering packet - these are the "glean drops" mentioned below. When the ARP reply arrives, arp-reply updates the adjacency, and subsequent packets are forwarded through the now-complete adjacency.
```c
/* Add a static neighbor entry. The C entry point has moved between
   releases (recent trees: src/vnet/ip-neighbor/); the CLI and binary
   API are the stable surface:
   vppctl> set ip neighbor GigabitEthernet0/8/0 10.0.0.2
           02:fe:00:00:00:02 static */

/* Show the ARP/neighbor table:
   vppctl> show ip neighbors */

/* Walk neighbor entries programmatically (src/vnet/ip-neighbor/) */
ip_neighbor_walk (AF_IP4, sw_if_index, my_cb_fn, my_arg);
```
Important: ARP processing is slow-path. Production deployments use static ARP entries for known peers (e.g., testpmd containers) to avoid ARP-generated glean drops at startup. In your mini-projects, add static ARP entries for container-to-container communication.
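One way to wire the static entries in at startup is an exec script referenced from startup.conf; addresses, MACs, and interface names below are illustrative:

```c
/* startup.conf:  unix { exec /etc/vpp/bootstrap.cli }

   bootstrap.cli:
     set interface state memif0/0 up
     set interface ip address memif0/0 10.10.1.1/24
     set ip neighbor memif0/0 10.10.1.2 02:fe:aa:bb:cc:01 static */
```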
L2 BRIDGING AND SWITCHING
Bridge Domains - L2 Forwarding
src/vnet/l2/

VPP supports full L2 bridging. Interfaces placed in the same bridge domain behave as ports on the same switch. The bridge domain handles MAC learning, flooding, and forwarding without involving the L3 FIB.
```c
/* Create bridge domain 1 and add two interfaces (CLI):
   vppctl> set interface l2 bridge GigabitEthernet0/8/0 1
   vppctl> set interface l2 bridge memif0/0 1 */

/* Programmatic: create the bridge domain */
l2_bridge_domain_add_del_args_t a = {
  .bd_id = 1,
  .flood = 1,
  .uu_flood = 1,
  .forward = 1,
  .learn = 1,
  .arp_term = 0,
  .mac_age = 5, /* MAC aging time, in minutes */
  .is_add = 1,
};
bd_add_del (&a);

/* Add an interface to the bridge domain - note this takes the internal
   bd_index, not the user-visible bd_id */
u32 bd_index = bd_find_or_add_bd_index (&bd_main, 1 /* bd_id */);
set_int_l2_mode (vm, vnm, MODE_L2_BRIDGE, sw_if_index, bd_index,
                 L2_BD_PORT_TYPE_NORMAL, 0 /* shg */, 0 /* bvi */);

/* Show the L2 MAC table and bridge state:
   vppctl> show l2fib verbose
   vppctl> show bridge-domain 1 detail */
```
Bridge domains are heavily used in the mini-projects - the memif vSwitch (Project 5) uses a bridge domain to connect multiple container VPP instances via memif interfaces.
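For reference, the host-side wiring for that topology looks roughly like this (memif IDs illustrative):

```c
/* vppctl> create interface memif id 0 master
   vppctl> set interface state memif0/0 up
   vppctl> set interface l2 bridge memif0/0 1
   ... repeat per container peer; every memif port joins bridge domain 1 */
```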
P2C COMPLETION CHECKLIST
- Know the difference between hw_if_index and sw_if_index; know when each is used
- Can retrieve sw_if_index from a buffer, by name, and by walking the interface table
- Understand feature arcs: what they are, how ordering works, and how to register a node in an arc
- Can implement a feature arc node using VNET_FEATURE_INIT and vnet_feature_next()
- Understand FIB prefix resolution to DPO chains; know the key DPO types and their next nodes
- Can add and delete FIB routes programmatically using fib_table_entry_path_add2 and fib_table_entry_delete
- Know how ARP works in VPP and how to add static ARP entries
- Can create a bridge domain and add interfaces to it
- Know the key vnet arcs: ip4-unicast, ip4-output, ip6-unicast, ethernet-output
✅ Phase 2 complete. You now understand all three VPP layers from the ground up. Next: Phase 3 - Interface Technologies. Start with the DPDK plugin - it's the most familiar given your background.